Feb 13 18:58:10.991573 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 18:58:10.991595 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:58:10.991606 kernel: KASLR enabled
Feb 13 18:58:10.991611 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:58:10.991617 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 18:58:10.991622 kernel: random: crng init done
Feb 13 18:58:10.991629 kernel: secureboot: Secure boot disabled
Feb 13 18:58:10.991635 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:58:10.991641 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 18:58:10.991648 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 18:58:10.991654 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991660 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991666 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991672 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991679 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991686 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991693 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991699 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991704 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:58:10.991710 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 18:58:10.991716 kernel: NUMA: Failed to initialise from firmware
Feb 13 18:58:10.991735 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:58:10.991741 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Feb 13 18:58:10.991747 kernel: Zone ranges:
Feb 13 18:58:10.991753 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:58:10.991760 kernel: DMA32 empty
Feb 13 18:58:10.991766 kernel: Normal empty
Feb 13 18:58:10.991772 kernel: Movable zone start for each node
Feb 13 18:58:10.991778 kernel: Early memory node ranges
Feb 13 18:58:10.991784 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 18:58:10.991790 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 18:58:10.991796 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 18:58:10.991802 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 18:58:10.991817 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 18:58:10.991823 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 18:58:10.991830 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 18:58:10.991836 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 18:58:10.991844 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 18:58:10.991850 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:58:10.991857 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 18:58:10.991865 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:58:10.991872 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 18:58:10.991879 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:58:10.991887 kernel: psci: Trusted OS migration not required
Feb 13 18:58:10.991894 kernel: psci: SMC Calling Convention v1.1
Feb 13 18:58:10.991900 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 18:58:10.991907 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:58:10.991913 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:58:10.991920 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 18:58:10.991926 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:58:10.991933 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:58:10.991939 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 18:58:10.991946 kernel: CPU features: detected: Spectre-v4
Feb 13 18:58:10.991953 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:58:10.991960 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 18:58:10.991966 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 18:58:10.991972 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 18:58:10.991979 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 18:58:10.991985 kernel: alternatives: applying boot alternatives
Feb 13 18:58:10.991992 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:58:10.991999 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 18:58:10.992006 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:58:10.992012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:58:10.992019 kernel: Fallback order for Node 0: 0
Feb 13 18:58:10.992027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 18:58:10.992033 kernel: Policy zone: DMA
Feb 13 18:58:10.992040 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:58:10.992046 kernel: software IO TLB: area num 4.
Feb 13 18:58:10.992052 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 18:58:10.992059 kernel: Memory: 2385948K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186340K reserved, 0K cma-reserved)
Feb 13 18:58:10.992065 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 18:58:10.992071 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:58:10.992078 kernel: rcu: RCU event tracing is enabled.
Feb 13 18:58:10.992085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 18:58:10.992091 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 18:58:10.992098 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 18:58:10.992106 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:58:10.992112 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 18:58:10.992119 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:58:10.992125 kernel: GICv3: 256 SPIs implemented
Feb 13 18:58:10.992131 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:58:10.992137 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:58:10.992144 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 18:58:10.992150 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 18:58:10.992156 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 18:58:10.992163 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 18:58:10.992169 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 18:58:10.992177 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 18:58:10.992184 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 18:58:10.992190 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:58:10.992196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:58:10.992203 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 18:58:10.992209 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 18:58:10.992216 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 18:58:10.992222 kernel: arm-pv: using stolen time PV
Feb 13 18:58:10.992229 kernel: Console: colour dummy device 80x25
Feb 13 18:58:10.992235 kernel: ACPI: Core revision 20230628
Feb 13 18:58:10.992242 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 18:58:10.992250 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:58:10.992257 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:58:10.992263 kernel: landlock: Up and running.
Feb 13 18:58:10.992270 kernel: SELinux: Initializing.
Feb 13 18:58:10.992276 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:58:10.992283 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:58:10.992290 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 18:58:10.992296 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 18:58:10.992303 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:58:10.992311 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 18:58:10.992317 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 18:58:10.992324 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 18:58:10.992330 kernel: Remapping and enabling EFI services.
Feb 13 18:58:10.992337 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:58:10.992343 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:58:10.992350 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 18:58:10.992356 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 18:58:10.992363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:58:10.992380 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 18:58:10.992387 kernel: Detected PIPT I-cache on CPU2
Feb 13 18:58:10.992399 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 18:58:10.992408 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 18:58:10.992415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:58:10.992422 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 18:58:10.992429 kernel: Detected PIPT I-cache on CPU3
Feb 13 18:58:10.992435 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 18:58:10.992443 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 18:58:10.992451 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:58:10.992458 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 18:58:10.992465 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 18:58:10.992471 kernel: SMP: Total of 4 processors activated.
Feb 13 18:58:10.992478 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:58:10.992485 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 18:58:10.992492 kernel: CPU features: detected: Common not Private translations
Feb 13 18:58:10.992499 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:58:10.992507 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 18:58:10.992514 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 18:58:10.992521 kernel: CPU features: detected: LSE atomic instructions
Feb 13 18:58:10.992528 kernel: CPU features: detected: Privileged Access Never
Feb 13 18:58:10.992535 kernel: CPU features: detected: RAS Extension Support
Feb 13 18:58:10.992542 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 18:58:10.992549 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:58:10.992556 kernel: alternatives: applying system-wide alternatives
Feb 13 18:58:10.992563 kernel: devtmpfs: initialized
Feb 13 18:58:10.992572 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:58:10.992579 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 18:58:10.992586 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:58:10.992593 kernel: SMBIOS 3.0.0 present.
Feb 13 18:58:10.992599 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 18:58:10.992606 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:58:10.992613 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:58:10.992620 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:58:10.992627 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:58:10.992636 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:58:10.992648 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 18:58:10.992655 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:58:10.992662 kernel: cpuidle: using governor menu
Feb 13 18:58:10.992669 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:58:10.992676 kernel: ASID allocator initialised with 32768 entries
Feb 13 18:58:10.992683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:58:10.992689 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:58:10.992697 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 18:58:10.992705 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 18:58:10.992712 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:58:10.992719 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:58:10.992726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:58:10.992733 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:58:10.992739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:58:10.992746 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:58:10.992754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:58:10.992761 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:58:10.992769 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:58:10.992776 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:58:10.992783 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:58:10.992790 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:58:10.992798 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:58:10.992810 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:58:10.992818 kernel: ACPI: Interpreter enabled
Feb 13 18:58:10.992825 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:58:10.992832 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 18:58:10.992839 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 18:58:10.992848 kernel: printk: console [ttyAMA0] enabled
Feb 13 18:58:10.992855 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 18:58:10.992995 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 18:58:10.993068 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 18:58:10.993133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 18:58:10.993197 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 18:58:10.993271 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 18:58:10.993283 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 18:58:10.993290 kernel: PCI host bridge to bus 0000:00
Feb 13 18:58:10.993361 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 18:58:10.993434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 18:58:10.993493 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 18:58:10.993552 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 18:58:10.993632 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 18:58:10.993716 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 18:58:10.993785 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 18:58:10.993862 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 18:58:10.993929 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 18:58:10.993994 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 18:58:10.994059 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 18:58:10.994129 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 18:58:10.994188 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 18:58:10.994246 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 18:58:10.994304 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 18:58:10.994313 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 18:58:10.994320 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 18:58:10.994328 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 18:58:10.994335 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 18:58:10.994344 kernel: iommu: Default domain type: Translated
Feb 13 18:58:10.994351 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:58:10.994358 kernel: efivars: Registered efivars operations
Feb 13 18:58:10.994373 kernel: vgaarb: loaded
Feb 13 18:58:10.994381 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:58:10.994388 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:58:10.994396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:58:10.994403 kernel: pnp: PnP ACPI init
Feb 13 18:58:10.994476 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 18:58:10.994488 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 18:58:10.994495 kernel: NET: Registered PF_INET protocol family
Feb 13 18:58:10.994502 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:58:10.994510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:58:10.994517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:58:10.994524 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:58:10.994532 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:58:10.994539 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:58:10.994547 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:58:10.994555 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:58:10.994562 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:58:10.994568 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:58:10.994575 kernel: kvm [1]: HYP mode not available
Feb 13 18:58:10.994582 kernel: Initialise system trusted keyrings
Feb 13 18:58:10.994589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:58:10.994596 kernel: Key type asymmetric registered
Feb 13 18:58:10.994603 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:58:10.994611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:58:10.994618 kernel: io scheduler mq-deadline registered
Feb 13 18:58:10.994625 kernel: io scheduler kyber registered
Feb 13 18:58:10.994632 kernel: io scheduler bfq registered
Feb 13 18:58:10.994639 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 18:58:10.994646 kernel: ACPI: button: Power Button [PWRB]
Feb 13 18:58:10.994654 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 18:58:10.994720 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 18:58:10.994730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:58:10.994739 kernel: thunder_xcv, ver 1.0
Feb 13 18:58:10.994746 kernel: thunder_bgx, ver 1.0
Feb 13 18:58:10.994753 kernel: nicpf, ver 1.0
Feb 13 18:58:10.994760 kernel: nicvf, ver 1.0
Feb 13 18:58:10.994845 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:58:10.994910 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:58:10 UTC (1739473090)
Feb 13 18:58:10.994919 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:58:10.994927 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 18:58:10.994936 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:58:10.994946 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:58:10.994953 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:58:10.994960 kernel: Segment Routing with IPv6
Feb 13 18:58:10.994967 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:58:10.994974 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:58:10.994982 kernel: Key type dns_resolver registered
Feb 13 18:58:10.994989 kernel: registered taskstats version 1
Feb 13 18:58:10.994996 kernel: Loading compiled-in X.509 certificates
Feb 13 18:58:10.995003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:58:10.995011 kernel: Key type .fscrypt registered
Feb 13 18:58:10.995018 kernel: Key type fscrypt-provisioning registered
Feb 13 18:58:10.995026 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:58:10.995033 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:58:10.995040 kernel: ima: No architecture policies found
Feb 13 18:58:10.995047 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:58:10.995053 kernel: clk: Disabling unused clocks
Feb 13 18:58:10.995060 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:58:10.995069 kernel: Run /init as init process
Feb 13 18:58:10.995076 kernel: with arguments:
Feb 13 18:58:10.995083 kernel: /init
Feb 13 18:58:10.995090 kernel: with environment:
Feb 13 18:58:10.995096 kernel: HOME=/
Feb 13 18:58:10.995103 kernel: TERM=linux
Feb 13 18:58:10.995110 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:58:10.995118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:58:10.995129 systemd[1]: Detected virtualization kvm.
Feb 13 18:58:10.995136 systemd[1]: Detected architecture arm64.
Feb 13 18:58:10.995143 systemd[1]: Running in initrd.
Feb 13 18:58:10.995151 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:58:10.995158 systemd[1]: Hostname set to .
Feb 13 18:58:10.995166 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:58:10.995173 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:58:10.995181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:58:10.995190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:58:10.995197 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:58:10.995205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:58:10.995213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:58:10.995220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:58:10.995230 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:58:10.995238 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:58:10.995248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:58:10.995256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:58:10.995263 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:58:10.995270 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:58:10.995278 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:58:10.995285 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:58:10.995293 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:58:10.995300 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:58:10.995308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:58:10.995317 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:58:10.995325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:58:10.995333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:58:10.995340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:58:10.995348 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:58:10.995356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:58:10.995363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:58:10.995408 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:58:10.995418 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:58:10.995425 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:58:10.995433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:58:10.995440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:58:10.995448 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:58:10.995455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:58:10.995463 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:58:10.995473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:58:10.995480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:58:10.995506 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 18:58:10.995526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:58:10.995535 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:58:10.995543 systemd-journald[238]: Journal started
Feb 13 18:58:10.995565 systemd-journald[238]: Runtime Journal (/run/log/journal/d7e25301356d44f099ed470fcf3af538) is 5.9M, max 47.3M, 41.4M free.
Feb 13 18:58:10.977056 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 18:58:10.997080 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:58:10.999398 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:58:11.000194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:58:11.004202 kernel: Bridge firewalling registered
Feb 13 18:58:11.001526 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 18:58:11.002322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:58:11.003473 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:58:11.008087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:58:11.013432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:58:11.018364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:58:11.019481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:58:11.021106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:58:11.027519 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:58:11.029441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:58:11.038837 dracut-cmdline[279]: dracut-dracut-053
Feb 13 18:58:11.041307 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:58:11.056756 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 18:58:11.056772 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:58:11.056803 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:58:11.061566 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 18:58:11.062516 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:58:11.064258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:58:11.111382 kernel: SCSI subsystem initialized
Feb 13 18:58:11.114385 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:58:11.122405 kernel: iscsi: registered transport (tcp)
Feb 13 18:58:11.136720 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:58:11.136754 kernel: QLogic iSCSI HBA Driver
Feb 13 18:58:11.183870 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:58:11.195543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:58:11.213859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:58:11.213930 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:58:11.213942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:58:11.259395 kernel: raid6: neonx8 gen() 15770 MB/s
Feb 13 18:58:11.276383 kernel: raid6: neonx4 gen() 15785 MB/s
Feb 13 18:58:11.293382 kernel: raid6: neonx2 gen() 13322 MB/s
Feb 13 18:58:11.310381 kernel: raid6: neonx1 gen() 10485 MB/s
Feb 13 18:58:11.327378 kernel: raid6: int64x8 gen() 6791 MB/s
Feb 13 18:58:11.344379 kernel: raid6: int64x4 gen() 7340 MB/s
Feb 13 18:58:11.361381 kernel: raid6: int64x2 gen() 6106 MB/s
Feb 13 18:58:11.378381 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 13 18:58:11.378398 kernel: raid6: using algorithm neonx4 gen() 15785 MB/s
Feb 13 18:58:11.395387 kernel: raid6: .... xor() 12326 MB/s, rmw enabled
Feb 13 18:58:11.395400 kernel: raid6: using neon recovery algorithm
Feb 13 18:58:11.400386 kernel: xor: measuring software checksum speed
Feb 13 18:58:11.400404 kernel: 8regs : 20902 MB/sec
Feb 13 18:58:11.400413 kernel: 32regs : 21687 MB/sec
Feb 13 18:58:11.401692 kernel: arm64_neon : 26945 MB/sec
Feb 13 18:58:11.401704 kernel: xor: using function: arm64_neon (26945 MB/sec)
Feb 13 18:58:11.452676 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:58:11.464199 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:58:11.480602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:58:11.492110 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 18:58:11.495238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:58:11.503555 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:58:11.515435 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 18:58:11.544625 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:58:11.552538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:58:11.591618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:58:11.598537 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:58:11.611775 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:58:11.614639 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:58:11.615754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:58:11.618128 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:58:11.629645 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:58:11.641437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:58:11.644862 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 18:58:11.655786 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 18:58:11.655912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 18:58:11.655925 kernel: GPT:9289727 != 19775487
Feb 13 18:58:11.655934 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 18:58:11.655953 kernel: GPT:9289727 != 19775487
Feb 13 18:58:11.655961 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 18:58:11.655970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:58:11.654412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:58:11.654592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:58:11.662770 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:58:11.664163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:58:11.664332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:58:11.668656 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:58:11.678395 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
Feb 13 18:58:11.680399 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (515)
Feb 13 18:58:11.684719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:58:11.696205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 18:58:11.697523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:58:11.702704 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 18:58:11.712402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:58:11.715953 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 18:58:11.716992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 18:58:11.733567 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:58:11.735661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:58:11.739907 disk-uuid[552]: Primary Header is updated.
Feb 13 18:58:11.739907 disk-uuid[552]: Secondary Entries is updated.
Feb 13 18:58:11.739907 disk-uuid[552]: Secondary Header is updated.
Feb 13 18:58:11.745256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:58:11.765134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:58:12.753392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:58:12.754320 disk-uuid[553]: The operation has completed successfully.
Feb 13 18:58:12.776004 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:58:12.776109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:58:12.799534 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:58:12.802255 sh[573]: Success
Feb 13 18:58:12.813465 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:58:12.844457 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:58:12.857711 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:58:12.859855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:58:12.869857 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:58:12.869897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:58:12.871825 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:58:12.871849 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:58:12.871860 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:58:12.874952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:58:12.876096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:58:12.876859 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:58:12.878844 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:58:12.890207 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:58:12.890252 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:58:12.890270 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:58:12.892390 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:58:12.900444 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:58:12.901710 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:58:12.907026 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:58:12.912553 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:58:12.970537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:58:12.990619 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:58:13.018933 systemd-networkd[759]: lo: Link UP
Feb 13 18:58:13.018944 systemd-networkd[759]: lo: Gained carrier
Feb 13 18:58:13.019834 systemd-networkd[759]: Enumeration completed
Feb 13 18:58:13.019954 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:58:13.020541 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:58:13.020544 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:58:13.021395 systemd[1]: Reached target network.target - Network.
Feb 13 18:58:13.021590 systemd-networkd[759]: eth0: Link UP
Feb 13 18:58:13.021593 systemd-networkd[759]: eth0: Gained carrier
Feb 13 18:58:13.021601 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:58:13.028045 ignition[671]: Ignition 2.20.0
Feb 13 18:58:13.028052 ignition[671]: Stage: fetch-offline
Feb 13 18:58:13.028088 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:13.028096 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:13.028254 ignition[671]: parsed url from cmdline: ""
Feb 13 18:58:13.028257 ignition[671]: no config URL provided
Feb 13 18:58:13.028262 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:58:13.028268 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:58:13.028297 ignition[671]: op(1): [started] loading QEMU firmware config module
Feb 13 18:58:13.028302 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 18:58:13.035600 ignition[671]: op(1): [finished] loading QEMU firmware config module
Feb 13 18:58:13.035623 ignition[671]: QEMU firmware config was not found. Ignoring...
Feb 13 18:58:13.046449 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:58:13.076277 ignition[671]: parsing config with SHA512: 9d01f3a6a0e6dd0b825ad9f6e919bd7791e7484f40b8b540e72c43ae3627fb8c1110d2029e1a1d32cdb99f050cba4a63318e8b5c1fc0d7ab3f7ec47e491a2a4e
Feb 13 18:58:13.080931 unknown[671]: fetched base config from "system"
Feb 13 18:58:13.080941 unknown[671]: fetched user config from "qemu"
Feb 13 18:58:13.081322 ignition[671]: fetch-offline: fetch-offline passed
Feb 13 18:58:13.081412 ignition[671]: Ignition finished successfully
Feb 13 18:58:13.083669 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:58:13.084765 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 18:58:13.092560 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:58:13.102905 ignition[770]: Ignition 2.20.0
Feb 13 18:58:13.102916 ignition[770]: Stage: kargs
Feb 13 18:58:13.103078 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:13.103088 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:13.104022 ignition[770]: kargs: kargs passed
Feb 13 18:58:13.104068 ignition[770]: Ignition finished successfully
Feb 13 18:58:13.106105 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:58:13.117575 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:58:13.126980 ignition[778]: Ignition 2.20.0
Feb 13 18:58:13.126991 ignition[778]: Stage: disks
Feb 13 18:58:13.127152 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:13.127161 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:13.128157 ignition[778]: disks: disks passed
Feb 13 18:58:13.128204 ignition[778]: Ignition finished successfully
Feb 13 18:58:13.131438 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:58:13.133118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:58:13.133955 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:58:13.135467 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:58:13.136881 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:58:13.138248 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:58:13.145529 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:58:13.157667 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 18:58:13.163324 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:58:13.175509 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:58:13.224383 kernel: EXT4-fs (vda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:58:13.224410 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:58:13.225727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:58:13.242464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:58:13.244206 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:58:13.245180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 18:58:13.245220 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:58:13.245242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:58:13.251233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:58:13.254097 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Feb 13 18:58:13.254119 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:58:13.254129 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:58:13.253453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:58:13.256912 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:58:13.259384 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:58:13.260650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:58:13.306157 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:58:13.310522 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:58:13.314648 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:58:13.318735 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:58:13.406523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:58:13.416530 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:58:13.418079 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:58:13.423398 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:58:13.441538 ignition[912]: INFO : Ignition 2.20.0
Feb 13 18:58:13.441538 ignition[912]: INFO : Stage: mount
Feb 13 18:58:13.442825 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:13.442825 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:13.442825 ignition[912]: INFO : mount: mount passed
Feb 13 18:58:13.442825 ignition[912]: INFO : Ignition finished successfully
Feb 13 18:58:13.443812 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:58:13.445349 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:58:13.460522 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:58:13.869149 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:58:13.881560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:58:13.888383 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Feb 13 18:58:13.890731 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:58:13.890752 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:58:13.890762 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:58:13.893397 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:58:13.893899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:58:13.916139 ignition[943]: INFO : Ignition 2.20.0
Feb 13 18:58:13.916139 ignition[943]: INFO : Stage: files
Feb 13 18:58:13.917406 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:13.917406 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:13.917406 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 18:58:13.920012 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 18:58:13.920012 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:58:13.923615 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:58:13.924745 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 18:58:13.924745 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:58:13.924241 unknown[943]: wrote ssh authorized keys file for user: core
Feb 13 18:58:13.927552 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 18:58:13.927552 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 18:58:14.155501 systemd-networkd[759]: eth0: Gained IPv6LL
Feb 13 18:58:14.528785 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 18:58:14.861057 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 18:58:14.861057 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:58:14.864087 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 18:58:15.069303 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 18:58:15.121455 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:58:15.121455 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:58:15.124151 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 18:58:15.341339 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 18:58:15.552659 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:58:15.552659 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 18:58:15.555564 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:58:15.578473 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:58:15.582328 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:58:15.584517 ignition[943]: INFO : files: files passed
Feb 13 18:58:15.584517 ignition[943]: INFO : Ignition finished successfully
Feb 13 18:58:15.585910 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:58:15.596547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:58:15.598761 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:58:15.600854 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:58:15.600942 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:58:15.606788 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 18:58:15.609159 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:58:15.609159 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:58:15.611572 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:58:15.611563 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:58:15.612686 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:58:15.627883 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:58:15.645651 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:58:15.645759 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:58:15.647399 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:58:15.648699 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:58:15.650015 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:58:15.650743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:58:15.665096 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:58:15.667304 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:58:15.678685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:58:15.680377 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:58:15.681340 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:58:15.682680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:58:15.682802 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:58:15.684651 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:58:15.686147 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:58:15.687362 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:58:15.688676 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:58:15.690137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:58:15.691711 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:58:15.693130 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:58:15.694710 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:58:15.696166 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:58:15.697482 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:58:15.698651 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:58:15.698773 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:58:15.700569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:58:15.701998 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:58:15.703512 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:58:15.704498 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:58:15.705782 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:58:15.705901 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:58:15.707952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:58:15.708066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:58:15.709478 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:58:15.710664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:58:15.715430 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:58:15.716516 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:58:15.718051 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:58:15.719210 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:58:15.719298 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:58:15.720473 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:58:15.720550 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:58:15.721767 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:58:15.721879 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:58:15.723252 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:58:15.723347 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:58:15.734606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:58:15.736624 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:58:15.737263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:58:15.737389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:58:15.738719 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:58:15.738820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:58:15.743835 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:58:15.745413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:58:15.749424 ignition[1000]: INFO : Ignition 2.20.0
Feb 13 18:58:15.749424 ignition[1000]: INFO : Stage: umount
Feb 13 18:58:15.751485 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:58:15.751485 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:58:15.751485 ignition[1000]: INFO : umount: umount passed
Feb 13 18:58:15.751485 ignition[1000]: INFO : Ignition finished successfully
Feb 13 18:58:15.752259 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:58:15.752733 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:58:15.752827 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:58:15.754339 systemd[1]: Stopped target network.target - Network.
Feb 13 18:58:15.755156 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:58:15.755251 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:58:15.756527 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:58:15.756564 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:58:15.757976 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:58:15.758015 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:58:15.759277 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:58:15.759318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:58:15.760756 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:58:15.761887 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:58:15.769501 systemd-networkd[759]: eth0: DHCPv6 lease lost
Feb 13 18:58:15.771112 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:58:15.771237 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:58:15.772832 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:58:15.772908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:58:15.789689 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:58:15.789730 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:58:15.798534 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:58:15.799308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:58:15.799380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:58:15.800988 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:58:15.801027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:58:15.803875 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:58:15.803929 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:58:15.805507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:58:15.805552 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:58:15.807064 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:58:15.816313 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:58:15.816429 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:58:15.827016 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:58:15.827156 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:58:15.828893 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:58:15.828930 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:58:15.830247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:58:15.830276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:58:15.831635 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:58:15.831678 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:58:15.833680 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:58:15.833723 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:58:15.835707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:58:15.835746 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:58:15.848546 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:58:15.849474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:58:15.849526 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:58:15.851313 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 18:58:15.851351 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:58:15.853349 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:58:15.853404 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:58:15.855120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:58:15.855162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:58:15.857551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:58:15.857642 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:58:15.883470 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:58:15.883578 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:58:15.885123 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:58:15.886256 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:58:15.886307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:58:15.899030 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:58:15.905005 systemd[1]: Switching root.
Feb 13 18:58:15.926155 systemd-journald[238]: Journal stopped
Feb 13 18:58:16.728419 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
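
At this point the initrd phase, including all Ignition stages, is complete. The Ignition "files" stage logged earlier wrote /etc/flatcar/update.conf, downloaded and linked the kubernetes sysext image, installed prepare-helm.service and coreos-metadata.service, and set their presets. The provisioning config itself is not part of this log; purely as an illustration, a Butane fragment (YAML, transpiled to Ignition JSON with the butane tool) that would produce roughly these operations could look like the sketch below. The file and unit contents are placeholders, not recovered from the log.

    # Hypothetical Butane sketch reconstructed from the logged ops;
    # not the config actually used on this machine.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/flatcar/update.conf
          # contents not recoverable from the log
        - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true        # matches op(12): preset enabled
          contents: |
            # unit body not recoverable from the log
        - name: coreos-metadata.service
          enabled: false       # matches op(10)/op(11): preset disabled
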
Feb 13 18:58:16.728474 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 18:58:16.728486 kernel: SELinux: policy capability open_perms=1
Feb 13 18:58:16.728498 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 18:58:16.728507 kernel: SELinux: policy capability always_check_network=0
Feb 13 18:58:16.728516 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 18:58:16.728529 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 18:58:16.728538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 18:58:16.728547 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 18:58:16.728556 kernel: audit: type=1403 audit(1739473096.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 18:58:16.728568 systemd[1]: Successfully loaded SELinux policy in 38.756ms.
Feb 13 18:58:16.728587 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.226ms.
Feb 13 18:58:16.728598 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:58:16.728610 systemd[1]: Detected virtualization kvm.
Feb 13 18:58:16.728620 systemd[1]: Detected architecture arm64.
Feb 13 18:58:16.728630 systemd[1]: Detected first boot.
Feb 13 18:58:16.728640 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:58:16.728650 zram_generator::config[1046]: No configuration found.
Feb 13 18:58:16.728662 systemd[1]: Populated /etc with preset unit settings.
Feb 13 18:58:16.728672 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 18:58:16.728682 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 18:58:16.728692 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:58:16.728703 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 18:58:16.728713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 18:58:16.728724 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 18:58:16.728735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 18:58:16.728745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 18:58:16.728757 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 18:58:16.728767 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 18:58:16.728777 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 18:58:16.728787 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:58:16.728805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:58:16.728818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 18:58:16.728828 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 18:58:16.728838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 18:58:16.728848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:58:16.728860 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 18:58:16.728870 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:58:16.728880 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 18:58:16.728890 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 18:58:16.728901 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:58:16.728911 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 18:58:16.728921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:58:16.728933 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:58:16.728943 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:58:16.728955 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:58:16.728965 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 18:58:16.728975 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 18:58:16.728986 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:58:16.728996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:58:16.729006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:58:16.729017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 18:58:16.729027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 18:58:16.729039 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 18:58:16.729050 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 18:58:16.729060 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 18:58:16.729070 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 18:58:16.729080 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 18:58:16.729090 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 18:58:16.729100 systemd[1]: Reached target machines.target - Containers.
Feb 13 18:58:16.729110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 18:58:16.729122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:58:16.729132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:58:16.729143 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 18:58:16.729153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:58:16.729163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:58:16.729174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:58:16.729184 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 18:58:16.729194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:58:16.729205 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 18:58:16.729216 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 18:58:16.729227 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 18:58:16.729237 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 18:58:16.729247 kernel: fuse: init (API version 7.39)
Feb 13 18:58:16.729256 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 18:58:16.729266 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:58:16.729276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:58:16.729286 kernel: ACPI: bus type drm_connector registered
Feb 13 18:58:16.729298 kernel: loop: module loaded
Feb 13 18:58:16.729310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 18:58:16.729320 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 18:58:16.729330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:58:16.729340 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 18:58:16.729350 systemd[1]: Stopped verity-setup.service.
Feb 13 18:58:16.729360 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 18:58:16.729484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 18:58:16.729499 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 18:58:16.729513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 18:58:16.729523 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 18:58:16.729533 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 18:58:16.729565 systemd-journald[1113]: Collecting audit messages is disabled.
Feb 13 18:58:16.729589 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 18:58:16.729600 systemd-journald[1113]: Journal started
Feb 13 18:58:16.729627 systemd-journald[1113]: Runtime Journal (/run/log/journal/d7e25301356d44f099ed470fcf3af538) is 5.9M, max 47.3M, 41.4M free.
Feb 13 18:58:16.552941 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 18:58:16.566697 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 18:58:16.567044 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 18:58:16.732400 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:58:16.733079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:58:16.734276 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 18:58:16.734442 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 18:58:16.735611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:58:16.735744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:58:16.736829 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:58:16.736962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:58:16.737997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:58:16.738135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:58:16.739268 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 18:58:16.739419 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 18:58:16.740567 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:58:16.740701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:58:16.741744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:58:16.742885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 18:58:16.744050 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 18:58:16.756465 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 18:58:16.769469 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 18:58:16.771279 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 18:58:16.772152 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 18:58:16.772190 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:58:16.774025 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 18:58:16.775991 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 18:58:16.777773 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 18:58:16.778639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:58:16.780139 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 18:58:16.781838 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 18:58:16.782722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:58:16.786527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 18:58:16.787436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:58:16.792469 systemd-journald[1113]: Time spent on flushing to /var/log/journal/d7e25301356d44f099ed470fcf3af538 is 11.550ms for 861 entries.
Feb 13 18:58:16.792469 systemd-journald[1113]: System Journal (/var/log/journal/d7e25301356d44f099ed470fcf3af538) is 8.0M, max 195.6M, 187.6M free.
Feb 13 18:58:16.812610 systemd-journald[1113]: Received client request to flush runtime journal.
Feb 13 18:58:16.792183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:58:16.798646 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 18:58:16.801493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:58:16.805610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:58:16.807631 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 18:58:16.808733 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 18:58:16.809947 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 18:58:16.811707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 18:58:16.815254 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 18:58:16.819041 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 18:58:16.829401 kernel: loop0: detected capacity change from 0 to 113552
Feb 13 18:58:16.832403 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 18:58:16.835601 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 18:58:16.839618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:58:16.840385 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 18:58:16.850438 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 18:58:16.852981 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Feb 13 18:58:16.853056 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Feb 13 18:58:16.856610 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 18:58:16.857230 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 18:58:16.861849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:58:16.870577 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 18:58:16.878385 kernel: loop1: detected capacity change from 0 to 189592
Feb 13 18:58:16.896537 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 18:58:16.907558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:58:16.913498 kernel: loop2: detected capacity change from 0 to 116784
Feb 13 18:58:16.920120 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Feb 13 18:58:16.920138 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Feb 13 18:58:16.924184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:58:16.964405 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 18:58:16.972389 kernel: loop4: detected capacity change from 0 to 189592
Feb 13 18:58:16.978386 kernel: loop5: detected capacity change from 0 to 116784
Feb 13 18:58:16.983009 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 18:58:16.983395 (sd-merge)[1185]: Merged extensions into '/usr'.
Feb 13 18:58:16.987059 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 18:58:16.987166 systemd[1]: Reloading...
Feb 13 18:58:17.041387 zram_generator::config[1210]: No configuration found.
Feb 13 18:58:17.080759 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 18:58:17.134411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:58:17.170234 systemd[1]: Reloading finished in 182 ms.
Feb 13 18:58:17.202588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 18:58:17.203954 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
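
The (sd-merge) lines above are systemd-sysext at work: it found the three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') and overlaid them onto /usr, which is why the subsequent daemon reload picks up containerd and Docker units that are not in the base image. The kubernetes image is visible to sysext only because of the /etc/extensions/kubernetes.raw symlink written during the Ignition files stage. For illustration, the merge can be inspected on a running system with the standard tool:

    # Show installed extension images and the current merge status.
    systemd-sysext list
    systemd-sysext status
    # Re-run the merge after adding or removing images under /etc/extensions.
    systemd-sysext refresh
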
Feb 13 18:58:17.219704 systemd[1]: Starting ensure-sysext.service...
Feb 13 18:58:17.221526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:58:17.238612 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Feb 13 18:58:17.238633 systemd[1]: Reloading...
Feb 13 18:58:17.247997 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 18:58:17.248728 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 18:58:17.249550 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 18:58:17.249873 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Feb 13 18:58:17.249992 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Feb 13 18:58:17.252617 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:58:17.252726 systemd-tmpfiles[1246]: Skipping /boot
Feb 13 18:58:17.260786 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:58:17.260926 systemd-tmpfiles[1246]: Skipping /boot
Feb 13 18:58:17.288398 zram_generator::config[1273]: No configuration found.
Feb 13 18:58:17.367680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:58:17.403157 systemd[1]: Reloading finished in 164 ms.
Feb 13 18:58:17.416458 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 18:58:17.429839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:58:17.437111 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:58:17.439545 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 18:58:17.441869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 18:58:17.448048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:58:17.451800 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:58:17.454864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 18:58:17.457874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:58:17.461662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:58:17.464148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:58:17.469233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:58:17.470259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:58:17.471105 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 18:58:17.476072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:58:17.476213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:58:17.477957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:58:17.478081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:58:17.480884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:58:17.481013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:58:17.488293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:58:17.510932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:58:17.513003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:58:17.513549 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Feb 13 18:58:17.515049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:58:17.515945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:58:17.517720 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 18:58:17.523124 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 18:58:17.525030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 18:58:17.528401 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 18:58:17.530012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:58:17.532297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:58:17.532459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:58:17.533768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:58:17.533915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:58:17.535843 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:58:17.535973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:58:17.549034 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 18:58:17.558820 systemd[1]: Finished ensure-sysext.service.
Feb 13 18:58:17.561149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:58:17.568735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:58:17.574666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:58:17.579200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:58:17.581733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:58:17.582656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:58:17.585208 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:58:17.588936 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 18:58:17.589907 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 18:58:17.590190 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 18:58:17.591624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:58:17.591801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:58:17.594771 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:58:17.594926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:58:17.598971 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:58:17.599161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:58:17.602986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:58:17.608030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:58:17.608185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:58:17.609784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:58:17.617772 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 18:58:17.622628 augenrules[1389]: No rules
Feb 13 18:58:17.629925 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:58:17.630128 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:58:17.654398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368)
Feb 13 18:58:17.655234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:58:17.668938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 18:58:17.694686 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 18:58:17.700552 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 18:58:17.702696 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 18:58:17.715554 systemd-networkd[1378]: lo: Link UP
Feb 13 18:58:17.715563 systemd-networkd[1378]: lo: Gained carrier
Feb 13 18:58:17.719430 systemd-networkd[1378]: Enumeration completed
Feb 13 18:58:17.719600 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:58:17.733910 systemd-resolved[1312]: Positive Trust Anchors:
Feb 13 18:58:17.733929 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:58:17.733961 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:58:17.737659 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:58:17.737663 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:58:17.738640 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 18:58:17.740833 systemd-networkd[1378]: eth0: Link UP
Feb 13 18:58:17.740841 systemd-networkd[1378]: eth0: Gained carrier
Feb 13 18:58:17.740856 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:58:17.747007 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Feb 13 18:58:17.755351 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:58:17.756768 systemd[1]: Reached target network.target - Network.
Feb 13 18:58:17.757466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:58:17.757545 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:58:17.759212 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Feb 13 18:58:17.759914 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 18:58:17.759970 systemd-timesyncd[1379]: Initial clock synchronization to Thu 2025-02-13 18:58:18.053539 UTC.
Feb 13 18:58:17.772675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:58:17.777425 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 18:58:17.780394 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 18:58:17.796154 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:58:17.811776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:58:17.821900 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 18:58:17.823115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:58:17.824039 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:58:17.824983 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 18:58:17.825955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 18:58:17.827084 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 18:58:17.828119 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 18:58:17.829090 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 18:58:17.829991 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 18:58:17.830025 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:58:17.830850 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:58:17.832498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 18:58:17.834698 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 18:58:17.851398 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 18:58:17.853359 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 18:58:17.854612 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 18:58:17.855488 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:58:17.856162 systemd[1]: Reached target basic.target - Basic System.
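
The "based on potentially unpredictable interface name" warning above refers to Flatcar's catch-all network unit: zz-default.network sorts last among the network files and matches interfaces by name, requesting DHCP, which is how eth0 obtained 10.0.0.78/16 from 10.0.0.1. An illustrative unit of that shape, in systemd.network syntax, is sketched below (not the shipped file verbatim):

    # Illustrative catch-all DHCP unit in the style of
    # /usr/lib/systemd/network/zz-default.network.
    [Match]
    Name=*

    [Network]
    DHCP=yes
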
Feb 13 18:58:17.857072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:58:17.857107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:58:17.857990 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 18:58:17.859728 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 18:58:17.860895 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:58:17.861554 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 18:58:17.864551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 18:58:17.865315 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 18:58:17.868606 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 18:58:17.871426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 18:58:17.873963 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 18:58:17.877153 jq[1423]: false
Feb 13 18:58:17.878712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 18:58:17.881482 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 18:58:17.886273 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 18:58:17.886720 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 18:58:17.887985 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 18:58:17.891215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 18:58:17.894130 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 18:58:17.898494 jq[1438]: true
Feb 13 18:58:17.898135 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 18:58:17.898504 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 18:58:17.898831 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 18:58:17.899059 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found loop3
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found loop4
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found loop5
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda1
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda2
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda3
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found usr
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda4
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda6
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda7
Feb 13 18:58:17.904843 extend-filesystems[1424]: Found vda9
Feb 13 18:58:17.904843 extend-filesystems[1424]: Checking size of /dev/vda9
Feb 13 18:58:17.903704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 18:58:17.911909 dbus-daemon[1422]: [system] SELinux support is enabled
Feb 13 18:58:17.903864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 18:58:17.928712 jq[1444]: true
Feb 13 18:58:17.913618 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 18:58:17.919486 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 18:58:17.922899 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 18:58:17.922944 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 18:58:17.929102 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 18:58:17.929136 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 18:58:17.937335 extend-filesystems[1424]: Resized partition /dev/vda9
Feb 13 18:58:17.940020 tar[1443]: linux-arm64/helm
Feb 13 18:58:17.949775 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Feb 13 18:58:17.960588 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1342)
Feb 13 18:58:17.960674 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 18:58:17.991386 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 18:58:17.994400 update_engine[1436]: I20250213 18:58:17.994241 1436 main.cc:92] Flatcar Update Engine starting
Feb 13 18:58:18.012057 update_engine[1436]: I20250213 18:58:18.004774 1436 update_check_scheduler.cc:74] Next update check in 3m33s
Feb 13 18:58:18.006464 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 18:58:18.012487 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 18:58:18.013209 systemd-logind[1432]: New seat seat0.
Feb 13 18:58:18.017010 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 18:58:18.018365 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 18:58:18.021272 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 18:58:18.021272 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 18:58:18.021272 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 18:58:18.024647 extend-filesystems[1424]: Resized filesystem in /dev/vda9
Feb 13 18:58:18.025358 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 18:58:18.027459 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 18:58:18.032686 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 18:58:18.035855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 18:58:18.038783 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
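
The extend-filesystems.service run above grows the root filesystem in place: resize2fs detects that /dev/vda9 is mounted on /, performs an on-line resize, and extends it from 553472 to 1864699 4k blocks. For reference, the equivalent manual step is a single invocation; when given no size argument, resize2fs grows a mounted ext4 filesystem to fill its (already enlarged) partition:

    # Grow the mounted ext4 filesystem on /dev/vda9 to the full partition size.
    resize2fs /dev/vda9
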
Feb 13 18:58:18.082055 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:58:18.226480 containerd[1445]: time="2025-02-13T18:58:18.226207840Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:58:18.260385 containerd[1445]: time="2025-02-13T18:58:18.260320284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.261960 containerd[1445]: time="2025-02-13T18:58:18.261893270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:18.261960 containerd[1445]: time="2025-02-13T18:58:18.261944774Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:58:18.261960 containerd[1445]: time="2025-02-13T18:58:18.261965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:58:18.262161 containerd[1445]: time="2025-02-13T18:58:18.262141211Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 18:58:18.262186 containerd[1445]: time="2025-02-13T18:58:18.262165014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262242 containerd[1445]: time="2025-02-13T18:58:18.262227424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262267 containerd[1445]: time="2025-02-13T18:58:18.262242685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262459 containerd[1445]: time="2025-02-13T18:58:18.262439246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262485 containerd[1445]: time="2025-02-13T18:58:18.262460022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262485 containerd[1445]: time="2025-02-13T18:58:18.262474329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262485 containerd[1445]: time="2025-02-13T18:58:18.262484074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262580 containerd[1445]: time="2025-02-13T18:58:18.262564813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262810 containerd[1445]: time="2025-02-13T18:58:18.262793429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262919 containerd[1445]: time="2025-02-13T18:58:18.262901662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:58:18.262961 containerd[1445]: time="2025-02-13T18:58:18.262918996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 18:58:18.263028 containerd[1445]: time="2025-02-13T18:58:18.263010932Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 18:58:18.263079 containerd[1445]: time="2025-02-13T18:58:18.263065629Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:58:18.266451 containerd[1445]: time="2025-02-13T18:58:18.266424130Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:58:18.267277 containerd[1445]: time="2025-02-13T18:58:18.266554548Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:58:18.267277 containerd[1445]: time="2025-02-13T18:58:18.266739623Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:58:18.267277 containerd[1445]: time="2025-02-13T18:58:18.266765292Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 18:58:18.267475 containerd[1445]: time="2025-02-13T18:58:18.266786358Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:58:18.267889 containerd[1445]: time="2025-02-13T18:58:18.267858446Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:58:18.268184 containerd[1445]: time="2025-02-13T18:58:18.268162784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 18:58:18.268292 containerd[1445]: time="2025-02-13T18:58:18.268274293Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:58:18.268346 containerd[1445]: time="2025-02-13T18:58:18.268297059Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 18:58:18.268346 containerd[1445]: time="2025-02-13T18:58:18.268313398Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 18:58:18.268346 containerd[1445]: time="2025-02-13T18:58:18.268329944Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268346 containerd[1445]: time="2025-02-13T18:58:18.268344914Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268434 containerd[1445]: time="2025-02-13T18:58:18.268358391Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268434 containerd[1445]: time="2025-02-13T18:58:18.268372034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268434 containerd[1445]: time="2025-02-13T18:58:18.268386714Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 18:58:18.268434 containerd[1445]: time="2025-02-13T18:58:18.268400357Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268434 containerd[1445]: time="2025-02-13T18:58:18.268431749Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268516 containerd[1445]: time="2025-02-13T18:58:18.268445268Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:58:18.268516 containerd[1445]: time="2025-02-13T18:58:18.268466292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268516 containerd[1445]: time="2025-02-13T18:58:18.268481221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268516 containerd[1445]: time="2025-02-13T18:58:18.268494118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268516 containerd[1445]: time="2025-02-13T18:58:18.268506766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268519040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268532145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268543673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268557067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268569798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268583358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268602 containerd[1445]: time="2025-02-13T18:58:18.268594513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268606042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268618648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268633452Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268660697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268673843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 18:58:18.268715 containerd[1445]: time="2025-02-13T18:58:18.268685454Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:58:18.268875 containerd[1445]: time="2025-02-13T18:58:18.268862815Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268883674Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268895575Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268907394Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268916434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268928128Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268937625Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:58:18.268991 containerd[1445]: time="2025-02-13T18:58:18.268947370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 18:58:18.269288 containerd[1445]: time="2025-02-13T18:58:18.269233544Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 18:58:18.269416 containerd[1445]: time="2025-02-13T18:58:18.269289651Z" level=info msg="Connect containerd service" Feb 13 18:58:18.269416 containerd[1445]: time="2025-02-13T18:58:18.269328715Z" level=info msg="using legacy CRI server" Feb 13 18:58:18.269416 containerd[1445]: time="2025-02-13T18:58:18.269335972Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:58:18.270692 containerd[1445]: time="2025-02-13T18:58:18.270261427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:58:18.270993 containerd[1445]: time="2025-02-13T18:58:18.270964568Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271528168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271576935Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271683468Z" level=info msg="Start subscribing containerd event" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271715979Z" level=info msg="Start recovering state" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271775653Z" level=info msg="Start event monitor" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271785688Z" level=info msg="Start snapshots syncer" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271794438Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271802649Z" level=info msg="Start streaming server" Feb 13 18:58:18.272368 containerd[1445]: time="2025-02-13T18:58:18.271922244Z" level=info msg="containerd successfully booted in 0.048937s" Feb 13 18:58:18.272014 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:58:18.340900 tar[1443]: linux-arm64/LICENSE Feb 13 18:58:18.340900 tar[1443]: linux-arm64/README.md Feb 13 18:58:18.352694 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 18:58:18.437320 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:58:18.456674 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:58:18.466712 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:58:18.472152 systemd[1]: issuegen.service: Deactivated successfully. 
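The CRI config dump above captures the two settings that matter most for the rest of this boot: the runc runtime runs with Options:map[SystemdCgroup:true] (systemd owns the container cgroups) on the overlayfs snapshotter, and the "failed to load cni during init" error is expected at this stage, since nothing has installed a network config under /etc/cni/net.d yet; pod networking stays down until a CNI plugin writes one. As a rough sketch (assumed layout, not read from this host), the matching stanza in /etc/containerd/config.toml would be:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        # systemd-managed cgroups, matching SystemdCgroup:true in the dump above
        SystemdCgroup = true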
Feb 13 18:58:18.472371 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:58:18.474823 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 18:58:18.488724 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:58:18.497730 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 18:58:18.499658 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 18:58:18.500659 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:58:19.596979 systemd-networkd[1378]: eth0: Gained IPv6LL Feb 13 18:58:19.604093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:58:19.605652 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:58:19.618708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 18:58:19.621124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:19.623138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:58:19.637954 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 18:58:19.638166 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 18:58:19.640356 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:58:19.643724 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:58:20.123966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:20.125237 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:58:20.127569 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:58:20.127709 systemd[1]: Startup finished in 598ms (kernel) + 5.447s (initrd) + 3.990s (userspace) = 10.036s. Feb 13 18:58:20.145816 agetty[1511]: failed to open credentials directory Feb 13 18:58:20.145907 agetty[1512]: failed to open credentials directory Feb 13 18:58:20.559529 kubelet[1535]: E0213 18:58:20.559368 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:58:20.562142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:58:20.562289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:58:23.386540 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:58:23.388106 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:38194.service - OpenSSH per-connection server daemon (10.0.0.1:38194). Feb 13 18:58:23.474020 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 38194 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:23.477879 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:23.487664 systemd-logind[1432]: New session 1 of user core. Feb 13 18:58:23.488839 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:58:23.498628 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
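The kubelet exit at 18:58:20 above is the normal pre-bootstrap failure: the unit starts at boot, finds no /var/lib/kubelet/config.yaml, and exits with status 1, and systemd keeps rescheduling it until kubeadm writes that file during init/join. For orientation only (kubeadm generates this file; the snippet is not this host's), the minimal shape of a KubeletConfiguration that lands there is:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # must agree with containerd's SystemdCgroup=true seen earlier
    cgroupDriver: systemd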
Feb 13 18:58:23.508927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:58:23.511267 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 18:58:23.517808 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:58:23.599693 systemd[1552]: Queued start job for default target default.target. Feb 13 18:58:23.608548 systemd[1552]: Created slice app.slice - User Application Slice. Feb 13 18:58:23.608599 systemd[1552]: Reached target paths.target - Paths. Feb 13 18:58:23.608613 systemd[1552]: Reached target timers.target - Timers. Feb 13 18:58:23.610082 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:58:23.620477 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:58:23.620642 systemd[1552]: Reached target sockets.target - Sockets. Feb 13 18:58:23.620662 systemd[1552]: Reached target basic.target - Basic System. Feb 13 18:58:23.620703 systemd[1552]: Reached target default.target - Main User Target. Feb 13 18:58:23.620730 systemd[1552]: Startup finished in 97ms. Feb 13 18:58:23.620915 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:58:23.631568 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 18:58:23.691704 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:38196.service - OpenSSH per-connection server daemon (10.0.0.1:38196). Feb 13 18:58:23.739917 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 38196 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:23.741223 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:23.745731 systemd-logind[1432]: New session 2 of user core. Feb 13 18:58:23.751565 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:58:23.804054 sshd[1565]: Connection closed by 10.0.0.1 port 38196 Feb 13 18:58:23.804496 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:23.820888 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:38196.service: Deactivated successfully. Feb 13 18:58:23.822743 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 18:58:23.824492 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Feb 13 18:58:23.835702 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:38210.service - OpenSSH per-connection server daemon (10.0.0.1:38210). Feb 13 18:58:23.836915 systemd-logind[1432]: Removed session 2. Feb 13 18:58:23.877903 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 38210 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:23.879216 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:23.882977 systemd-logind[1432]: New session 3 of user core. Feb 13 18:58:23.894562 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:58:23.943577 sshd[1572]: Connection closed by 10.0.0.1 port 38210 Feb 13 18:58:23.943824 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:23.959359 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:38210.service: Deactivated successfully. Feb 13 18:58:23.960923 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 18:58:23.962243 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. 
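Each SSH login above follows the same systemd handoff: pam_systemd requests user-runtime-dir@500 (the /run/user/500 tmpfs) and user@500.service, a dedicated per-user manager that assembles its own small target tree (paths, timers, sockets, basic, default) before the login is placed in session-1.scope; here that user manager finished startup in 97ms. It can be inspected like any other manager, e.g. as the core user:

    systemctl --user list-dependencies default.target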
Feb 13 18:58:23.963948 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:38214.service - OpenSSH per-connection server daemon (10.0.0.1:38214). Feb 13 18:58:23.964946 systemd-logind[1432]: Removed session 3. Feb 13 18:58:24.012954 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:24.014263 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:24.018448 systemd-logind[1432]: New session 4 of user core. Feb 13 18:58:24.028579 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 18:58:24.081082 sshd[1579]: Connection closed by 10.0.0.1 port 38214 Feb 13 18:58:24.081464 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:24.094116 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:38214.service: Deactivated successfully. Feb 13 18:58:24.095627 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:58:24.097189 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:58:24.099130 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:38230.service - OpenSSH per-connection server daemon (10.0.0.1:38230). Feb 13 18:58:24.100215 systemd-logind[1432]: Removed session 4. Feb 13 18:58:24.145223 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 38230 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:24.146589 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:24.150991 systemd-logind[1432]: New session 5 of user core. Feb 13 18:58:24.167572 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:58:24.228904 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 18:58:24.229210 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:58:24.252234 sudo[1587]: pam_unix(sudo:session): session closed for user root Feb 13 18:58:24.253792 sshd[1586]: Connection closed by 10.0.0.1 port 38230 Feb 13 18:58:24.254363 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:24.270487 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:38230.service: Deactivated successfully. Feb 13 18:58:24.272433 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 18:58:24.274075 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Feb 13 18:58:24.285768 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:38236.service - OpenSSH per-connection server daemon (10.0.0.1:38236). Feb 13 18:58:24.286654 systemd-logind[1432]: Removed session 5. Feb 13 18:58:24.328920 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:24.330299 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:24.334196 systemd-logind[1432]: New session 6 of user core. Feb 13 18:58:24.342549 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 18:58:24.394764 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 18:58:24.395059 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:58:24.398390 sudo[1596]: pam_unix(sudo:session): session closed for user root Feb 13 18:58:24.403717 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 18:58:24.404022 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:58:24.423994 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:58:24.448399 augenrules[1618]: No rules Feb 13 18:58:24.449854 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:58:24.450049 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:58:24.451200 sudo[1595]: pam_unix(sudo:session): session closed for user root Feb 13 18:58:24.452716 sshd[1594]: Connection closed by 10.0.0.1 port 38236 Feb 13 18:58:24.453303 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:24.459940 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:38236.service: Deactivated successfully. Feb 13 18:58:24.461640 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 18:58:24.463053 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Feb 13 18:58:24.464408 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:38250.service - OpenSSH per-connection server daemon (10.0.0.1:38250). Feb 13 18:58:24.465297 systemd-logind[1432]: Removed session 6. Feb 13 18:58:24.512015 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 38250 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:58:24.513164 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:58:24.517163 systemd-logind[1432]: New session 7 of user core. Feb 13 18:58:24.526567 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 18:58:24.579142 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 18:58:24.579454 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:58:24.971697 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 18:58:24.971871 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 18:58:25.243335 dockerd[1649]: time="2025-02-13T18:58:25.243203455Z" level=info msg="Starting up" Feb 13 18:58:25.403870 dockerd[1649]: time="2025-02-13T18:58:25.403821546Z" level=info msg="Loading containers: start." Feb 13 18:58:25.572405 kernel: Initializing XFRM netlink socket Feb 13 18:58:25.639908 systemd-networkd[1378]: docker0: Link UP Feb 13 18:58:25.675850 dockerd[1649]: time="2025-02-13T18:58:25.675806078Z" level=info msg="Loading containers: done." Feb 13 18:58:25.701812 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1823028311-merged.mount: Deactivated successfully. 
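dockerd's startup above is the stock sequence: the kernel initializes its XFRM netlink socket as libnetwork loads (used for encrypted overlay networking), systemd-networkd reports the docker0 bridge link up, and the overlay2 driver probes for the opaque-directory kernel bug in a throwaway mount, which is the var-lib-docker-overlay2-opaque-bug-check mount systemd then tears down. Assuming untouched daemon defaults (nothing in this log overrides them), the bridge would carry Docker's usual gateway address:

    ip addr show docker0    # expect inet 172.17.0.1/16 on a default install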
Feb 13 18:58:25.706893 dockerd[1649]: time="2025-02-13T18:58:25.706836430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 18:58:25.706993 dockerd[1649]: time="2025-02-13T18:58:25.706949882Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 18:58:25.707391 dockerd[1649]: time="2025-02-13T18:58:25.707353296Z" level=info msg="Daemon has completed initialization" Feb 13 18:58:25.739072 dockerd[1649]: time="2025-02-13T18:58:25.739004349Z" level=info msg="API listen on /run/docker.sock" Feb 13 18:58:25.739273 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 18:58:26.593120 containerd[1445]: time="2025-02-13T18:58:26.593003395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 18:58:27.245025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435723607.mount: Deactivated successfully. Feb 13 18:58:29.271210 containerd[1445]: time="2025-02-13T18:58:29.271151446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:29.273079 containerd[1445]: time="2025-02-13T18:58:29.273034709Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 18:58:29.274094 containerd[1445]: time="2025-02-13T18:58:29.274054052Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:29.276468 containerd[1445]: time="2025-02-13T18:58:29.276420766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:29.277759 containerd[1445]: time="2025-02-13T18:58:29.277639984Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.684589474s" Feb 13 18:58:29.277759 containerd[1445]: time="2025-02-13T18:58:29.277671165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 18:58:29.278462 containerd[1445]: time="2025-02-13T18:58:29.278271759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 18:58:30.812971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 18:58:30.823558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:30.915465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 18:58:30.919442 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:58:30.958569 kubelet[1908]: E0213 18:58:30.958503 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:58:30.961643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:58:30.961785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:58:31.301533 containerd[1445]: time="2025-02-13T18:58:31.301406951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:31.301966 containerd[1445]: time="2025-02-13T18:58:31.301922062Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 18:58:31.302934 containerd[1445]: time="2025-02-13T18:58:31.302879216Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:31.305782 containerd[1445]: time="2025-02-13T18:58:31.305736868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:31.306951 containerd[1445]: time="2025-02-13T18:58:31.306902280Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.028598431s" Feb 13 18:58:31.306951 containerd[1445]: time="2025-02-13T18:58:31.306941492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 18:58:31.307788 containerd[1445]: time="2025-02-13T18:58:31.307766959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 18:58:32.489601 containerd[1445]: time="2025-02-13T18:58:32.489553995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:32.490520 containerd[1445]: time="2025-02-13T18:58:32.490291869Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 18:58:32.491178 containerd[1445]: time="2025-02-13T18:58:32.491134976Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:32.494846 containerd[1445]: time="2025-02-13T18:58:32.494798280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 18:58:32.495815 containerd[1445]: time="2025-02-13T18:58:32.495782501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.18789288s" Feb 13 18:58:32.495815 containerd[1445]: time="2025-02-13T18:58:32.495842680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 18:58:32.496283 containerd[1445]: time="2025-02-13T18:58:32.496249289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 18:58:33.789905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564223850.mount: Deactivated successfully. Feb 13 18:58:34.168618 containerd[1445]: time="2025-02-13T18:58:34.168492371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:34.169245 containerd[1445]: time="2025-02-13T18:58:34.169202556Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 18:58:34.169809 containerd[1445]: time="2025-02-13T18:58:34.169777156Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:34.171757 containerd[1445]: time="2025-02-13T18:58:34.171724119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:34.172496 containerd[1445]: time="2025-02-13T18:58:34.172465720Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.676173359s" Feb 13 18:58:34.172546 containerd[1445]: time="2025-02-13T18:58:34.172497256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 18:58:34.173059 containerd[1445]: time="2025-02-13T18:58:34.172894451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 18:58:34.844832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660758221.mount: Deactivated successfully. 
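The pull sequence running through this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, then coredns, pause and etcd below) is precisely the control-plane image set, and the PullImage requests reach containerd's CRI API while the kubelet is still crash-looping, so some client is driving a pre-pull directly. Assuming kubeadm is that client (the install.sh run earlier is not shown), the same set is reproducible with:

    kubeadm config images list --kubernetes-version v1.31.6
    kubeadm config images pull --kubernetes-version v1.31.6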
Feb 13 18:58:35.643101 containerd[1445]: time="2025-02-13T18:58:35.643047748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:35.643636 containerd[1445]: time="2025-02-13T18:58:35.643583613Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 18:58:35.644511 containerd[1445]: time="2025-02-13T18:58:35.644458438Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:35.647270 containerd[1445]: time="2025-02-13T18:58:35.647236616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:35.648561 containerd[1445]: time="2025-02-13T18:58:35.648507860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.475578142s" Feb 13 18:58:35.648561 containerd[1445]: time="2025-02-13T18:58:35.648546245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 18:58:35.649198 containerd[1445]: time="2025-02-13T18:58:35.648996024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 18:58:36.215491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402601565.mount: Deactivated successfully. 
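Note the pause-image split: the CRI config dumped earlier pins SandboxImage:registry.k8s.io/pause:3.8, while the pre-pull fetches pause:3.10, so containerd still pulls 3.8 on its own once it starts creating sandboxes (visible near the end of this log). Aligning the two is a one-line containerd setting; a sketch for /etc/containerd/config.toml:

    [plugins."io.containerd.grpc.v1.cri"]
      # match the sandbox image that was pre-pulled
      sandbox_image = "registry.k8s.io/pause:3.10"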
Feb 13 18:58:36.219974 containerd[1445]: time="2025-02-13T18:58:36.219912421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:36.221472 containerd[1445]: time="2025-02-13T18:58:36.221422095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 18:58:36.222396 containerd[1445]: time="2025-02-13T18:58:36.222340332Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:36.224300 containerd[1445]: time="2025-02-13T18:58:36.224249206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:36.225267 containerd[1445]: time="2025-02-13T18:58:36.225227563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 576.199341ms" Feb 13 18:58:36.225267 containerd[1445]: time="2025-02-13T18:58:36.225262478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 18:58:36.226148 containerd[1445]: time="2025-02-13T18:58:36.226065334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 18:58:36.952444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369104578.mount: Deactivated successfully. Feb 13 18:58:38.763488 containerd[1445]: time="2025-02-13T18:58:38.762362923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:38.793205 containerd[1445]: time="2025-02-13T18:58:38.793073922Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 18:58:38.816976 containerd[1445]: time="2025-02-13T18:58:38.816934077Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:38.833729 containerd[1445]: time="2025-02-13T18:58:38.833656180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:58:38.835016 containerd[1445]: time="2025-02-13T18:58:38.834883972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.608789348s" Feb 13 18:58:38.835016 containerd[1445]: time="2025-02-13T18:58:38.834916013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 18:58:41.212109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
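"Scheduled restart job, restart counter is at 2" is systemd's Restart= policy at work: every config.yaml failure ends kubelet.service with status=1/FAILURE and the unit is re-queued after the configured delay; the roughly ten-second spacing between attempts in this log matches the usual kubeadm packaging. A sketch of the relevant directives, assumed from that packaging rather than read off this host:

    [Service]
    Restart=always
    RestartSec=10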
Feb 13 18:58:41.222614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:41.370531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:41.394911 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:58:41.441580 kubelet[2063]: E0213 18:58:41.441535 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:58:41.443723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:58:41.443868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:58:44.880984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:44.901662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:44.924280 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... Feb 13 18:58:44.924296 systemd[1]: Reloading... Feb 13 18:58:44.993484 zram_generator::config[2119]: No configuration found. Feb 13 18:58:45.109744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:58:45.163202 systemd[1]: Reloading finished in 238 ms. Feb 13 18:58:45.204084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:45.208147 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 18:58:45.208540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:45.210361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:45.311762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:45.316813 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:58:45.351694 kubelet[2166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:58:45.351694 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 18:58:45.351694 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
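This time the kubelet comes up for real: the reload above follows /var/lib/kubelet/config.yaml finally existing, and the first complaints are deprecated command-line flags. Two of the three have config-file equivalents today; a sketch of how they would move into the KubeletConfiguration (field names per current upstream docs, values inferred from elsewhere in this log, not from this host's actual file):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/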
Feb 13 18:58:45.352028 kubelet[2166]: I0213 18:58:45.351810 2166 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:58:46.098793 kubelet[2166]: I0213 18:58:46.098619 2166 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 18:58:46.098793 kubelet[2166]: I0213 18:58:46.098652 2166 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:58:46.098947 kubelet[2166]: I0213 18:58:46.098939 2166 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 18:58:46.125703 kubelet[2166]: E0213 18:58:46.125651 2166 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:46.126386 kubelet[2166]: I0213 18:58:46.126345 2166 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:58:46.133840 kubelet[2166]: E0213 18:58:46.133787 2166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:58:46.133840 kubelet[2166]: I0213 18:58:46.133828 2166 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:58:46.138112 kubelet[2166]: I0213 18:58:46.138082 2166 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 18:58:46.138946 kubelet[2166]: I0213 18:58:46.138909 2166 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 18:58:46.139114 kubelet[2166]: I0213 18:58:46.139075 2166 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:58:46.139316 kubelet[2166]: I0213 18:58:46.139107 2166 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 18:58:46.139541 kubelet[2166]: I0213 18:58:46.139457 2166 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:58:46.139541 kubelet[2166]: I0213 18:58:46.139470 2166 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 18:58:46.139688 kubelet[2166]: I0213 18:58:46.139662 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:58:46.141395 kubelet[2166]: I0213 18:58:46.141374 2166 kubelet.go:408] "Attempting to sync node with API server" Feb 13 18:58:46.141446 kubelet[2166]: I0213 18:58:46.141400 2166 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:58:46.141503 kubelet[2166]: I0213 18:58:46.141493 2166 kubelet.go:314] "Adding apiserver pod source" Feb 13 18:58:46.141534 kubelet[2166]: I0213 18:58:46.141508 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:58:46.144761 kubelet[2166]: W0213 18:58:46.144640 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:46.144761 kubelet[2166]: E0213 18:58:46.144707 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:46.145420 kubelet[2166]: W0213 18:58:46.145143 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:46.145420 kubelet[2166]: E0213 18:58:46.145199 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:46.147037 kubelet[2166]: I0213 18:58:46.146859 2166 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:58:46.149080 kubelet[2166]: I0213 18:58:46.149054 2166 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:58:46.152027 kubelet[2166]: W0213 18:58:46.151970 2166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 18:58:46.153132 kubelet[2166]: I0213 18:58:46.152869 2166 server.go:1269] "Started kubelet" Feb 13 18:58:46.154249 kubelet[2166]: I0213 18:58:46.154219 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:58:46.159554 kubelet[2166]: I0213 18:58:46.159129 2166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:58:46.159554 kubelet[2166]: I0213 18:58:46.159508 2166 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:58:46.159782 kubelet[2166]: I0213 18:58:46.159735 2166 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:58:46.160554 kubelet[2166]: I0213 18:58:46.159955 2166 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 18:58:46.160554 kubelet[2166]: I0213 18:58:46.160090 2166 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 18:58:46.160554 kubelet[2166]: I0213 18:58:46.160154 2166 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:58:46.160692 kubelet[2166]: W0213 18:58:46.160637 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:46.160918 kubelet[2166]: E0213 18:58:46.160691 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:46.161339 kubelet[2166]: I0213 18:58:46.161311 2166 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:58:46.162276 kubelet[2166]: I0213 18:58:46.161412 2166 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:58:46.162276 kubelet[2166]: E0213 18:58:46.159726 2166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d99794ada203 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:58:46.152839683 +0000 UTC m=+0.832872468,LastTimestamp:2025-02-13 18:58:46.152839683 +0000 UTC m=+0.832872468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 18:58:46.162276 kubelet[2166]: E0213 18:58:46.161364 2166 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:58:46.162276 kubelet[2166]: I0213 18:58:46.161532 2166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:58:46.162630 kubelet[2166]: E0213 18:58:46.162421 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Feb 13 18:58:46.162920 kubelet[2166]: E0213 18:58:46.162893 2166 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:58:46.163389 kubelet[2166]: I0213 18:58:46.163307 2166 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:58:46.163810 kubelet[2166]: I0213 18:58:46.163787 2166 server.go:460] "Adding debug handlers to kubelet server" Feb 13 18:58:46.174258 kubelet[2166]: I0213 18:58:46.174107 2166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:58:46.175302 kubelet[2166]: I0213 18:58:46.175277 2166 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 18:58:46.175750 kubelet[2166]: I0213 18:58:46.175399 2166 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 18:58:46.175750 kubelet[2166]: I0213 18:58:46.175420 2166 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 18:58:46.175750 kubelet[2166]: E0213 18:58:46.175479 2166 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:58:46.178093 kubelet[2166]: I0213 18:58:46.178069 2166 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 18:58:46.178206 kubelet[2166]: I0213 18:58:46.178194 2166 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 18:58:46.178268 kubelet[2166]: I0213 18:58:46.178260 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:58:46.178355 kubelet[2166]: W0213 18:58:46.178064 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:46.178428 kubelet[2166]: E0213 18:58:46.178393 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:46.241722 kubelet[2166]: I0213 18:58:46.241686 2166 policy_none.go:49] "None policy: Start" Feb 13 18:58:46.242907 kubelet[2166]: I0213 18:58:46.242830 2166 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 18:58:46.242907 kubelet[2166]: I0213 18:58:46.242890 2166 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:58:46.248715 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 18:58:46.262334 kubelet[2166]: E0213 18:58:46.262299 2166 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:58:46.269196 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 18:58:46.271727 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
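The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice trio maps directly onto the kubelet's QoS classes under the systemd cgroup driver ("--cgroups-per-qos enabled" with cgroup root /, CgroupVersion:2 in the node config above). On the unified hierarchy that gives a layout along these lines (illustrative paths, not captured from this host):

    /sys/fs/cgroup/kubepods.slice/                           # Guaranteed pods sit directly here
    /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/
    /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/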
Feb 13 18:58:46.275768 kubelet[2166]: E0213 18:58:46.275683 2166 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 18:58:46.284304 kubelet[2166]: I0213 18:58:46.284205 2166 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:58:46.284476 kubelet[2166]: I0213 18:58:46.284458 2166 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 18:58:46.284539 kubelet[2166]: I0213 18:58:46.284478 2166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:58:46.285594 kubelet[2166]: I0213 18:58:46.285212 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:58:46.286305 kubelet[2166]: E0213 18:58:46.286282 2166 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 18:58:46.363653 kubelet[2166]: E0213 18:58:46.363509 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Feb 13 18:58:46.386885 kubelet[2166]: I0213 18:58:46.386819 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:46.387392 kubelet[2166]: E0213 18:58:46.387338 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Feb 13 18:58:46.485440 systemd[1]: Created slice kubepods-burstable-pod975d5c2d2ab8b6d92464acaac20732ca.slice - libcontainer container kubepods-burstable-pod975d5c2d2ab8b6d92464acaac20732ca.slice. Feb 13 18:58:46.497104 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 18:58:46.513702 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
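The three pod-scoped burstable slices carry the UIDs of the control-plane static pods: the kubelet read them from its static pod path (/etc/kubernetes/manifests, logged at startup) and provisions their cgroups before any sandbox exists, API server reachable or not; the volume reconciliation just below ties 975d5c2d... to kube-apiserver, 98eb2295... to kube-controller-manager and 04cca2c4... to kube-scheduler. Under kubeadm conventions those would come from the usual manifest files (file names assumed; the directory itself is not listed in this log):

    /etc/kubernetes/manifests/kube-apiserver.yaml
    /etc/kubernetes/manifests/kube-controller-manager.yaml
    /etc/kubernetes/manifests/kube-scheduler.yaml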
Feb 13 18:58:46.563197 kubelet[2166]: I0213 18:58:46.563164 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:46.563411 kubelet[2166]: I0213 18:58:46.563392 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:46.563489 kubelet[2166]: I0213 18:58:46.563475 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:46.563554 kubelet[2166]: I0213 18:58:46.563543 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:46.563614 kubelet[2166]: I0213 18:58:46.563604 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:46.563701 kubelet[2166]: I0213 18:58:46.563686 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:46.563857 kubelet[2166]: I0213 18:58:46.563757 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 18:58:46.563857 kubelet[2166]: I0213 18:58:46.563777 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:46.563857 kubelet[2166]: I0213 18:58:46.563793 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:46.588976 kubelet[2166]: I0213 18:58:46.588551 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:46.588976 kubelet[2166]: E0213 18:58:46.588890 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Feb 13 18:58:46.764757 kubelet[2166]: E0213 18:58:46.764624 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Feb 13 18:58:46.795273 kubelet[2166]: E0213 18:58:46.795242 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:46.796084 containerd[1445]: time="2025-02-13T18:58:46.796003426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:975d5c2d2ab8b6d92464acaac20732ca,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:46.811561 kubelet[2166]: E0213 18:58:46.811253 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:46.812174 containerd[1445]: time="2025-02-13T18:58:46.811900054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:46.816513 kubelet[2166]: E0213 18:58:46.816481 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:46.816961 containerd[1445]: time="2025-02-13T18:58:46.816911505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:46.991267 kubelet[2166]: I0213 18:58:46.990896 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:46.991586 kubelet[2166]: E0213 18:58:46.991548 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Feb 13 18:58:47.121676 kubelet[2166]: W0213 18:58:47.121529 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:47.121676 kubelet[2166]: E0213 18:58:47.121603 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:47.321622 kubelet[2166]: W0213 18:58:47.321555 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.78:6443: connect: connection refused Feb 13 18:58:47.321622 kubelet[2166]: E0213 18:58:47.321625 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:47.375489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255700209.mount: Deactivated successfully. Feb 13 18:58:47.383621 containerd[1445]: time="2025-02-13T18:58:47.383570718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:58:47.384870 containerd[1445]: time="2025-02-13T18:58:47.384829399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 18:58:47.388125 containerd[1445]: time="2025-02-13T18:58:47.388079680Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:58:47.391216 containerd[1445]: time="2025-02-13T18:58:47.391174362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:58:47.392444 containerd[1445]: time="2025-02-13T18:58:47.392384566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:58:47.394427 containerd[1445]: time="2025-02-13T18:58:47.394395061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:58:47.394525 containerd[1445]: time="2025-02-13T18:58:47.394493576Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:58:47.395304 containerd[1445]: time="2025-02-13T18:58:47.395264485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:58:47.396683 containerd[1445]: time="2025-02-13T18:58:47.396632649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.550035ms" Feb 13 18:58:47.400785 containerd[1445]: time="2025-02-13T18:58:47.400725854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.738804ms" Feb 13 18:58:47.401321 containerd[1445]: time="2025-02-13T18:58:47.401276234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.298874ms" Feb 13 18:58:47.561820 containerd[1445]: time="2025-02-13T18:58:47.561583851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:47.562131 containerd[1445]: time="2025-02-13T18:58:47.562057573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:47.562131 containerd[1445]: time="2025-02-13T18:58:47.562112935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:47.562246 containerd[1445]: time="2025-02-13T18:58:47.562131469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.562438 containerd[1445]: time="2025-02-13T18:58:47.562337987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:47.562500 containerd[1445]: time="2025-02-13T18:58:47.562466925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:47.562537 containerd[1445]: time="2025-02-13T18:58:47.562513161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.565173 kubelet[2166]: E0213 18:58:47.565106 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="1.6s" Feb 13 18:58:47.569391 containerd[1445]: time="2025-02-13T18:58:47.569304265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.569569 containerd[1445]: time="2025-02-13T18:58:47.569516307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.570162 containerd[1445]: time="2025-02-13T18:58:47.570108839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:47.570162 containerd[1445]: time="2025-02-13T18:58:47.570140903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.570545 containerd[1445]: time="2025-02-13T18:58:47.570455464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:47.587120 kubelet[2166]: W0213 18:58:47.587056 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:47.587236 kubelet[2166]: E0213 18:58:47.587129 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:47.600617 systemd[1]: Started cri-containerd-012a1475fccea6f9cf0d62b0709a92a450bbc257c09c0bf61217b854b8a8bde8.scope - libcontainer container 012a1475fccea6f9cf0d62b0709a92a450bbc257c09c0bf61217b854b8a8bde8. Feb 13 18:58:47.602163 systemd[1]: Started cri-containerd-5839031685ff2c3b8aa0d75ac2728398b136e82a8cceeb94c8e994fac63b3a57.scope - libcontainer container 5839031685ff2c3b8aa0d75ac2728398b136e82a8cceeb94c8e994fac63b3a57. Feb 13 18:58:47.603556 systemd[1]: Started cri-containerd-ceee0fadf94e30353ee2d45910c74850e86eb2ea3b625b08ebcaaa5c31292f97.scope - libcontainer container ceee0fadf94e30353ee2d45910c74850e86eb2ea3b625b08ebcaaa5c31292f97. Feb 13 18:58:47.618491 kubelet[2166]: W0213 18:58:47.618118 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Feb 13 18:58:47.618491 kubelet[2166]: E0213 18:58:47.618236 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:58:47.639315 containerd[1445]: time="2025-02-13T18:58:47.639030453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:975d5c2d2ab8b6d92464acaac20732ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"012a1475fccea6f9cf0d62b0709a92a450bbc257c09c0bf61217b854b8a8bde8\"" Feb 13 18:58:47.644699 containerd[1445]: time="2025-02-13T18:58:47.641013367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceee0fadf94e30353ee2d45910c74850e86eb2ea3b625b08ebcaaa5c31292f97\"" Feb 13 18:58:47.644823 kubelet[2166]: E0213 18:58:47.644102 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:47.645325 kubelet[2166]: E0213 18:58:47.645303 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:47.646145 containerd[1445]: time="2025-02-13T18:58:47.646070747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5839031685ff2c3b8aa0d75ac2728398b136e82a8cceeb94c8e994fac63b3a57\"" Feb 13 18:58:47.646810 kubelet[2166]: E0213 18:58:47.646775 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:47.648303 containerd[1445]: time="2025-02-13T18:58:47.648239163Z" level=info msg="CreateContainer within sandbox \"012a1475fccea6f9cf0d62b0709a92a450bbc257c09c0bf61217b854b8a8bde8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 18:58:47.648552 containerd[1445]: time="2025-02-13T18:58:47.648527543Z" level=info msg="CreateContainer within sandbox \"ceee0fadf94e30353ee2d45910c74850e86eb2ea3b625b08ebcaaa5c31292f97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 18:58:47.649346 containerd[1445]: time="2025-02-13T18:58:47.649319187Z" level=info msg="CreateContainer within sandbox \"5839031685ff2c3b8aa0d75ac2728398b136e82a8cceeb94c8e994fac63b3a57\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 18:58:47.676550 containerd[1445]: time="2025-02-13T18:58:47.676472676Z" level=info msg="CreateContainer within sandbox \"012a1475fccea6f9cf0d62b0709a92a450bbc257c09c0bf61217b854b8a8bde8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ad3fd789c6e83d87b024e4acc79336654500177a72d3bba63cd59adf9b1d247\"" Feb 13 18:58:47.677286 containerd[1445]: time="2025-02-13T18:58:47.677251630Z" level=info msg="StartContainer for \"9ad3fd789c6e83d87b024e4acc79336654500177a72d3bba63cd59adf9b1d247\"" Feb 13 18:58:47.677940 containerd[1445]: time="2025-02-13T18:58:47.677867501Z" level=info msg="CreateContainer within sandbox \"5839031685ff2c3b8aa0d75ac2728398b136e82a8cceeb94c8e994fac63b3a57\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0f04472aad85a95561d6d5a141b6f1e31e8d4924c4102ebe58cbb6e78fff65f\"" Feb 13 18:58:47.678283 containerd[1445]: time="2025-02-13T18:58:47.678261281Z" level=info msg="StartContainer for \"d0f04472aad85a95561d6d5a141b6f1e31e8d4924c4102ebe58cbb6e78fff65f\"" Feb 13 18:58:47.684298 containerd[1445]: time="2025-02-13T18:58:47.684155461Z" level=info msg="CreateContainer within sandbox \"ceee0fadf94e30353ee2d45910c74850e86eb2ea3b625b08ebcaaa5c31292f97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd3954fa48b8dd397cbba0ab270a3dc2e0af33bba467cbc702044ce9884c475c\"" Feb 13 18:58:47.684851 containerd[1445]: time="2025-02-13T18:58:47.684826693Z" level=info msg="StartContainer for \"fd3954fa48b8dd397cbba0ab270a3dc2e0af33bba467cbc702044ce9884c475c\"" Feb 13 18:58:47.702630 systemd[1]: Started cri-containerd-9ad3fd789c6e83d87b024e4acc79336654500177a72d3bba63cd59adf9b1d247.scope - libcontainer container 9ad3fd789c6e83d87b024e4acc79336654500177a72d3bba63cd59adf9b1d247. Feb 13 18:58:47.706103 systemd[1]: Started cri-containerd-d0f04472aad85a95561d6d5a141b6f1e31e8d4924c4102ebe58cbb6e78fff65f.scope - libcontainer container d0f04472aad85a95561d6d5a141b6f1e31e8d4924c4102ebe58cbb6e78fff65f. Feb 13 18:58:47.712618 systemd[1]: Started cri-containerd-fd3954fa48b8dd397cbba0ab270a3dc2e0af33bba467cbc702044ce9884c475c.scope - libcontainer container fd3954fa48b8dd397cbba0ab270a3dc2e0af33bba467cbc702044ce9884c475c. 
Feb 13 18:58:47.743496 containerd[1445]: time="2025-02-13T18:58:47.743452367Z" level=info msg="StartContainer for \"9ad3fd789c6e83d87b024e4acc79336654500177a72d3bba63cd59adf9b1d247\" returns successfully" Feb 13 18:58:47.769439 containerd[1445]: time="2025-02-13T18:58:47.769120682Z" level=info msg="StartContainer for \"d0f04472aad85a95561d6d5a141b6f1e31e8d4924c4102ebe58cbb6e78fff65f\" returns successfully" Feb 13 18:58:47.769439 containerd[1445]: time="2025-02-13T18:58:47.769214874Z" level=info msg="StartContainer for \"fd3954fa48b8dd397cbba0ab270a3dc2e0af33bba467cbc702044ce9884c475c\" returns successfully" Feb 13 18:58:47.793788 kubelet[2166]: I0213 18:58:47.793750 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:47.794173 kubelet[2166]: E0213 18:58:47.794137 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Feb 13 18:58:48.184987 kubelet[2166]: E0213 18:58:48.184946 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:48.189978 kubelet[2166]: E0213 18:58:48.189952 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:48.191535 kubelet[2166]: E0213 18:58:48.191512 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:49.193772 kubelet[2166]: E0213 18:58:49.193547 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:49.193772 kubelet[2166]: E0213 18:58:49.193619 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:49.396050 kubelet[2166]: I0213 18:58:49.395763 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:49.621755 kubelet[2166]: E0213 18:58:49.621651 2166 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 18:58:49.791635 kubelet[2166]: I0213 18:58:49.791483 2166 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 18:58:50.144000 kubelet[2166]: I0213 18:58:50.143958 2166 apiserver.go:52] "Watching apiserver" Feb 13 18:58:50.160819 kubelet[2166]: I0213 18:58:50.160751 2166 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 18:58:50.669306 kubelet[2166]: E0213 18:58:50.669167 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:51.196120 kubelet[2166]: E0213 18:58:51.196076 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:51.791617 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)... 
Feb 13 18:58:51.791635 systemd[1]: Reloading... Feb 13 18:58:51.854414 zram_generator::config[2488]: No configuration found. Feb 13 18:58:51.943676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:58:52.007594 systemd[1]: Reloading finished in 215 ms. Feb 13 18:58:52.038959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:52.057774 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 18:58:52.058110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:52.058162 systemd[1]: kubelet.service: Consumed 1.248s CPU time, 118.0M memory peak, 0B memory swap peak. Feb 13 18:58:52.068757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:58:52.161078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:58:52.165731 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:58:52.204545 kubelet[2529]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:58:52.204545 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 18:58:52.204545 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:58:52.204545 kubelet[2529]: I0213 18:58:52.203476 2529 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:58:52.209019 kubelet[2529]: I0213 18:58:52.208983 2529 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 18:58:52.209019 kubelet[2529]: I0213 18:58:52.209009 2529 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:58:52.209318 kubelet[2529]: I0213 18:58:52.209235 2529 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 18:58:52.211692 kubelet[2529]: I0213 18:58:52.210660 2529 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 18:58:52.212580 kubelet[2529]: I0213 18:58:52.212554 2529 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:58:52.215222 kubelet[2529]: E0213 18:58:52.215194 2529 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:58:52.215222 kubelet[2529]: I0213 18:58:52.215221 2529 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:58:52.218539 kubelet[2529]: I0213 18:58:52.217992 2529 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 18:58:52.218539 kubelet[2529]: I0213 18:58:52.218429 2529 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 18:58:52.220393 kubelet[2529]: I0213 18:58:52.218740 2529 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:58:52.220393 kubelet[2529]: I0213 18:58:52.218875 2529 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 18:58:52.220393 kubelet[2529]: I0213 18:58:52.219713 2529 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:58:52.220393 kubelet[2529]: I0213 18:58:52.219731 2529 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 18:58:52.220818 kubelet[2529]: I0213 18:58:52.219789 2529 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:58:52.220818 kubelet[2529]: I0213 18:58:52.219925 2529 kubelet.go:408] "Attempting to sync node with API server" Feb 13 18:58:52.220818 kubelet[2529]: I0213 18:58:52.219955 2529 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:58:52.220818 kubelet[2529]: I0213 18:58:52.219978 2529 kubelet.go:314] "Adding apiserver pod source" Feb 13 18:58:52.220818 kubelet[2529]: I0213 18:58:52.219987 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:58:52.221526 kubelet[2529]: I0213 18:58:52.221433 2529 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:58:52.221922 kubelet[2529]: I0213 18:58:52.221891 2529 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:58:52.224392 kubelet[2529]: I0213 18:58:52.222260 2529 server.go:1269] "Started kubelet" Feb 13 18:58:52.224392 kubelet[2529]: I0213 18:58:52.223090 2529 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 
18:58:52.224392 kubelet[2529]: I0213 18:58:52.223348 2529 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:58:52.224392 kubelet[2529]: I0213 18:58:52.223430 2529 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:58:52.224392 kubelet[2529]: I0213 18:58:52.223987 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:58:52.224392 kubelet[2529]: I0213 18:58:52.224270 2529 server.go:460] "Adding debug handlers to kubelet server" Feb 13 18:58:52.225934 kubelet[2529]: I0213 18:58:52.225796 2529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:58:52.226891 kubelet[2529]: I0213 18:58:52.226855 2529 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 18:58:52.227016 kubelet[2529]: I0213 18:58:52.227001 2529 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 18:58:52.227195 kubelet[2529]: I0213 18:58:52.227181 2529 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:58:52.228068 kubelet[2529]: E0213 18:58:52.228000 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:58:52.229307 kubelet[2529]: I0213 18:58:52.229271 2529 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:58:52.233963 kubelet[2529]: E0213 18:58:52.233937 2529 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:58:52.243399 kubelet[2529]: I0213 18:58:52.242610 2529 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:58:52.245861 kubelet[2529]: I0213 18:58:52.245814 2529 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 18:58:52.245861 kubelet[2529]: I0213 18:58:52.245861 2529 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 18:58:52.245991 kubelet[2529]: I0213 18:58:52.245879 2529 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 18:58:52.245991 kubelet[2529]: E0213 18:58:52.245950 2529 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:58:52.250562 kubelet[2529]: I0213 18:58:52.250517 2529 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:58:52.250949 kubelet[2529]: I0213 18:58:52.250934 2529 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:58:52.280456 kubelet[2529]: I0213 18:58:52.280424 2529 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 18:58:52.280456 kubelet[2529]: I0213 18:58:52.280451 2529 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 18:58:52.280597 kubelet[2529]: I0213 18:58:52.280473 2529 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:58:52.280696 kubelet[2529]: I0213 18:58:52.280629 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 18:58:52.280696 kubelet[2529]: I0213 18:58:52.280644 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 18:58:52.280751 kubelet[2529]: I0213 18:58:52.280700 2529 policy_none.go:49] "None policy: Start" Feb 13 18:58:52.281330 kubelet[2529]: I0213 18:58:52.281311 2529 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 18:58:52.281330 kubelet[2529]: I0213 18:58:52.281334 2529 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:58:52.281514 kubelet[2529]: I0213 18:58:52.281497 2529 state_mem.go:75] "Updated machine memory state" Feb 13 18:58:52.285162 kubelet[2529]: I0213 18:58:52.285090 2529 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:58:52.285278 kubelet[2529]: I0213 18:58:52.285262 2529 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 18:58:52.285314 kubelet[2529]: I0213 18:58:52.285279 2529 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:58:52.285909 kubelet[2529]: I0213 18:58:52.285570 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:58:52.353111 kubelet[2529]: E0213 18:58:52.352992 2529 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:52.391740 kubelet[2529]: I0213 18:58:52.391695 2529 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:58:52.406660 kubelet[2529]: I0213 18:58:52.406626 2529 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 18:58:52.406771 kubelet[2529]: I0213 18:58:52.406716 2529 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 18:58:52.428403 kubelet[2529]: I0213 18:58:52.428349 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 18:58:52.428403 kubelet[2529]: I0213 18:58:52.428400 2529 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:52.428572 kubelet[2529]: I0213 18:58:52.428419 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:52.428572 kubelet[2529]: I0213 18:58:52.428448 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:52.428572 kubelet[2529]: I0213 18:58:52.428468 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/975d5c2d2ab8b6d92464acaac20732ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"975d5c2d2ab8b6d92464acaac20732ca\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:52.428572 kubelet[2529]: I0213 18:58:52.428483 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:52.428572 kubelet[2529]: I0213 18:58:52.428496 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:52.428682 kubelet[2529]: I0213 18:58:52.428521 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:52.428682 kubelet[2529]: I0213 18:58:52.428545 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:58:52.653479 kubelet[2529]: E0213 18:58:52.653339 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:52.653479 kubelet[2529]: E0213 18:58:52.653339 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:52.653479 kubelet[2529]: E0213 18:58:52.653350 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:52.843530 sudo[2565]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 18:58:52.843792 sudo[2565]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 18:58:53.221558 kubelet[2529]: I0213 18:58:53.221247 2529 apiserver.go:52] "Watching apiserver" Feb 13 18:58:53.227959 kubelet[2529]: I0213 18:58:53.227911 2529 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 18:58:53.261278 kubelet[2529]: E0213 18:58:53.258167 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:53.261278 kubelet[2529]: E0213 18:58:53.259154 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:53.265964 kubelet[2529]: E0213 18:58:53.265938 2529 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 18:58:53.266291 kubelet[2529]: E0213 18:58:53.266276 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:53.272981 sudo[2565]: pam_unix(sudo:session): session closed for user root Feb 13 18:58:53.301512 kubelet[2529]: I0213 18:58:53.301271 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.301252649 podStartE2EDuration="3.301252649s" podCreationTimestamp="2025-02-13 18:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:58:53.293582855 +0000 UTC m=+1.124693834" watchObservedRunningTime="2025-02-13 18:58:53.301252649 +0000 UTC m=+1.132363588" Feb 13 18:58:53.313744 kubelet[2529]: I0213 18:58:53.313679 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.313627205 podStartE2EDuration="1.313627205s" podCreationTimestamp="2025-02-13 18:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:58:53.302589777 +0000 UTC m=+1.133700756" watchObservedRunningTime="2025-02-13 18:58:53.313627205 +0000 UTC m=+1.144738184" Feb 13 18:58:54.259159 kubelet[2529]: E0213 18:58:54.259123 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:55.174872 sudo[1629]: pam_unix(sudo:session): session closed for user root Feb 13 18:58:55.177107 sshd[1628]: Connection closed by 10.0.0.1 port 38250 Feb 13 18:58:55.176889 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Feb 13 18:58:55.180253 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:38250.service: Deactivated successfully. 
Feb 13 18:58:55.182200 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 18:58:55.182420 systemd[1]: session-7.scope: Consumed 8.420s CPU time, 154.5M memory peak, 0B memory swap peak. Feb 13 18:58:55.184460 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Feb 13 18:58:55.186983 systemd-logind[1432]: Removed session 7. Feb 13 18:58:57.913420 kubelet[2529]: E0213 18:58:57.913357 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:57.932942 kubelet[2529]: I0213 18:58:57.932836 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.932821628 podStartE2EDuration="5.932821628s" podCreationTimestamp="2025-02-13 18:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:58:53.313891685 +0000 UTC m=+1.145002704" watchObservedRunningTime="2025-02-13 18:58:57.932821628 +0000 UTC m=+5.763932607" Feb 13 18:58:57.935226 kubelet[2529]: I0213 18:58:57.935207 2529 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 18:58:57.935697 containerd[1445]: time="2025-02-13T18:58:57.935567397Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 18:58:57.935965 kubelet[2529]: I0213 18:58:57.935770 2529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 18:58:58.265284 kubelet[2529]: E0213 18:58:58.265194 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:58.727701 systemd[1]: Created slice kubepods-besteffort-poda3fdcd8e_85b1_4e4b_b977_c56f3009c450.slice - libcontainer container kubepods-besteffort-poda3fdcd8e_85b1_4e4b_b977_c56f3009c450.slice. Feb 13 18:58:58.745232 systemd[1]: Created slice kubepods-burstable-poda2879869_f4c9_4451_bbd2_3e5ac5a899eb.slice - libcontainer container kubepods-burstable-poda2879869_f4c9_4451_bbd2_3e5ac5a899eb.slice. 
Feb 13 18:58:58.773890 kubelet[2529]: I0213 18:58:58.773843 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hubble-tls\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.773890 kubelet[2529]: I0213 18:58:58.773889 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dl7\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-kube-api-access-j6dl7\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773909 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3fdcd8e-85b1-4e4b-b977-c56f3009c450-xtables-lock\") pod \"kube-proxy-nrrtl\" (UID: \"a3fdcd8e-85b1-4e4b-b977-c56f3009c450\") " pod="kube-system/kube-proxy-nrrtl" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773925 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ltg\" (UniqueName: \"kubernetes.io/projected/a3fdcd8e-85b1-4e4b-b977-c56f3009c450-kube-api-access-s7ltg\") pod \"kube-proxy-nrrtl\" (UID: \"a3fdcd8e-85b1-4e4b-b977-c56f3009c450\") " pod="kube-system/kube-proxy-nrrtl" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773940 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3fdcd8e-85b1-4e4b-b977-c56f3009c450-kube-proxy\") pod \"kube-proxy-nrrtl\" (UID: \"a3fdcd8e-85b1-4e4b-b977-c56f3009c450\") " pod="kube-system/kube-proxy-nrrtl" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773961 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-xtables-lock\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773978 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-cgroup\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774045 kubelet[2529]: I0213 18:58:58.773992 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-run\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774164 kubelet[2529]: I0213 18:58:58.774007 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-etc-cni-netd\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774164 kubelet[2529]: I0213 18:58:58.774022 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3fdcd8e-85b1-4e4b-b977-c56f3009c450-lib-modules\") pod \"kube-proxy-nrrtl\" (UID: \"a3fdcd8e-85b1-4e4b-b977-c56f3009c450\") " pod="kube-system/kube-proxy-nrrtl" Feb 13 18:58:58.774164 kubelet[2529]: I0213 18:58:58.774037 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-lib-modules\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774164 kubelet[2529]: I0213 18:58:58.774051 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-clustermesh-secrets\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774164 kubelet[2529]: I0213 18:58:58.774067 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-net\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774254 kubelet[2529]: I0213 18:58:58.774082 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-kernel\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774254 kubelet[2529]: I0213 18:58:58.774096 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hostproc\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774254 kubelet[2529]: I0213 18:58:58.774110 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cni-path\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774254 kubelet[2529]: I0213 18:58:58.774125 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-bpf-maps\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:58.774254 kubelet[2529]: I0213 18:58:58.774140 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-config-path\") pod \"cilium-b8c7v\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") " pod="kube-system/cilium-b8c7v" Feb 13 18:58:59.008102 systemd[1]: Created slice kubepods-besteffort-podacf79cad_5434_4cad_9732_122dd8973263.slice - libcontainer container kubepods-besteffort-podacf79cad_5434_4cad_9732_122dd8973263.slice. 
Feb 13 18:58:59.041881 kubelet[2529]: E0213 18:58:59.041793 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.043129 containerd[1445]: time="2025-02-13T18:58:59.043072235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrrtl,Uid:a3fdcd8e-85b1-4e4b-b977-c56f3009c450,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:59.048288 kubelet[2529]: E0213 18:58:59.048249 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.048975 containerd[1445]: time="2025-02-13T18:58:59.048721209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8c7v,Uid:a2879869-f4c9-4451-bbd2-3e5ac5a899eb,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:59.062353 containerd[1445]: time="2025-02-13T18:58:59.062195222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:59.062353 containerd[1445]: time="2025-02-13T18:58:59.062300188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:59.062353 containerd[1445]: time="2025-02-13T18:58:59.062332122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.062536 containerd[1445]: time="2025-02-13T18:58:59.062459537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.072398 containerd[1445]: time="2025-02-13T18:58:59.072300932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:59.072495 containerd[1445]: time="2025-02-13T18:58:59.072381207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:59.072495 containerd[1445]: time="2025-02-13T18:58:59.072397694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.073420 containerd[1445]: time="2025-02-13T18:58:59.073360433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.076338 kubelet[2529]: I0213 18:58:59.076303 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ldc\" (UniqueName: \"kubernetes.io/projected/acf79cad-5434-4cad-9732-122dd8973263-kube-api-access-n9ldc\") pod \"cilium-operator-5d85765b45-pxplm\" (UID: \"acf79cad-5434-4cad-9732-122dd8973263\") " pod="kube-system/cilium-operator-5d85765b45-pxplm" Feb 13 18:58:59.076516 kubelet[2529]: I0213 18:58:59.076468 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf79cad-5434-4cad-9732-122dd8973263-cilium-config-path\") pod \"cilium-operator-5d85765b45-pxplm\" (UID: \"acf79cad-5434-4cad-9732-122dd8973263\") " pod="kube-system/cilium-operator-5d85765b45-pxplm" Feb 13 18:58:59.078521 systemd[1]: Started cri-containerd-872d902b7e3d6b7c5347c827acf62b9687988e002ba6ffc5c2b85d58cd332795.scope - libcontainer container 872d902b7e3d6b7c5347c827acf62b9687988e002ba6ffc5c2b85d58cd332795. Feb 13 18:58:59.085247 systemd[1]: Started cri-containerd-a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583.scope - libcontainer container a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583. Feb 13 18:58:59.101336 containerd[1445]: time="2025-02-13T18:58:59.101251069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrrtl,Uid:a3fdcd8e-85b1-4e4b-b977-c56f3009c450,Namespace:kube-system,Attempt:0,} returns sandbox id \"872d902b7e3d6b7c5347c827acf62b9687988e002ba6ffc5c2b85d58cd332795\"" Feb 13 18:58:59.101858 kubelet[2529]: E0213 18:58:59.101834 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.104952 containerd[1445]: time="2025-02-13T18:58:59.104921623Z" level=info msg="CreateContainer within sandbox \"872d902b7e3d6b7c5347c827acf62b9687988e002ba6ffc5c2b85d58cd332795\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 18:58:59.109014 containerd[1445]: time="2025-02-13T18:58:59.108922401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8c7v,Uid:a2879869-f4c9-4451-bbd2-3e5ac5a899eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\"" Feb 13 18:58:59.110659 kubelet[2529]: E0213 18:58:59.110633 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.112652 containerd[1445]: time="2025-02-13T18:58:59.112485349Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 18:58:59.121007 containerd[1445]: time="2025-02-13T18:58:59.120963272Z" level=info msg="CreateContainer within sandbox \"872d902b7e3d6b7c5347c827acf62b9687988e002ba6ffc5c2b85d58cd332795\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"afee428118e6b79f00beaf1409c2d95f0fa2f5a24b04dafddaadd59acf32a228\"" Feb 13 18:58:59.122168 containerd[1445]: time="2025-02-13T18:58:59.122047983Z" level=info msg="StartContainer for \"afee428118e6b79f00beaf1409c2d95f0fa2f5a24b04dafddaadd59acf32a228\"" Feb 13 18:58:59.145553 systemd[1]: Started 
cri-containerd-afee428118e6b79f00beaf1409c2d95f0fa2f5a24b04dafddaadd59acf32a228.scope - libcontainer container afee428118e6b79f00beaf1409c2d95f0fa2f5a24b04dafddaadd59acf32a228. Feb 13 18:58:59.176482 containerd[1445]: time="2025-02-13T18:58:59.176308114Z" level=info msg="StartContainer for \"afee428118e6b79f00beaf1409c2d95f0fa2f5a24b04dafddaadd59acf32a228\" returns successfully" Feb 13 18:58:59.269755 kubelet[2529]: E0213 18:58:59.268895 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.313564 kubelet[2529]: E0213 18:58:59.313313 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:58:59.315183 containerd[1445]: time="2025-02-13T18:58:59.315103248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pxplm,Uid:acf79cad-5434-4cad-9732-122dd8973263,Namespace:kube-system,Attempt:0,}" Feb 13 18:58:59.339817 containerd[1445]: time="2025-02-13T18:58:59.338533747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:58:59.339817 containerd[1445]: time="2025-02-13T18:58:59.338597814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:58:59.339817 containerd[1445]: time="2025-02-13T18:58:59.338609459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.339817 containerd[1445]: time="2025-02-13T18:58:59.338688854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:58:59.354509 systemd[1]: Started cri-containerd-c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5.scope - libcontainer container c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5. 
Feb 13 18:58:59.385835 containerd[1445]: time="2025-02-13T18:58:59.385772828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pxplm,Uid:acf79cad-5434-4cad-9732-122dd8973263,Namespace:kube-system,Attempt:0,} returns sandbox id \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\"" Feb 13 18:58:59.386507 kubelet[2529]: E0213 18:58:59.386485 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:00.543570 kubelet[2529]: E0213 18:59:00.543332 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:00.567899 kubelet[2529]: I0213 18:59:00.567846 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nrrtl" podStartSLOduration=2.567827479 podStartE2EDuration="2.567827479s" podCreationTimestamp="2025-02-13 18:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:58:59.27795275 +0000 UTC m=+7.109063769" watchObservedRunningTime="2025-02-13 18:59:00.567827479 +0000 UTC m=+8.398938458" Feb 13 18:59:01.035777 kubelet[2529]: E0213 18:59:01.035433 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:01.273710 kubelet[2529]: E0213 18:59:01.273651 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:01.273710 kubelet[2529]: E0213 18:59:01.273734 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:03.095459 update_engine[1436]: I20250213 18:59:03.095397 1436 update_attempter.cc:509] Updating boot flags... Feb 13 18:59:03.277471 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2909) Feb 13 18:59:03.328401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2911) Feb 13 18:59:03.432140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177508711.mount: Deactivated successfully. 
Feb 13 18:59:06.593062 containerd[1445]: time="2025-02-13T18:59:06.592924753Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:59:06.593785 containerd[1445]: time="2025-02-13T18:59:06.593698348Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 18:59:06.594468 containerd[1445]: time="2025-02-13T18:59:06.594441854Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:59:06.600920 containerd[1445]: time="2025-02-13T18:59:06.600814630Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.488290546s" Feb 13 18:59:06.600920 containerd[1445]: time="2025-02-13T18:59:06.600864645Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 18:59:06.604613 containerd[1445]: time="2025-02-13T18:59:06.604352865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 18:59:06.609494 containerd[1445]: time="2025-02-13T18:59:06.608944541Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 18:59:06.630069 containerd[1445]: time="2025-02-13T18:59:06.630015584Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\"" Feb 13 18:59:06.631041 containerd[1445]: time="2025-02-13T18:59:06.630941065Z" level=info msg="StartContainer for \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\"" Feb 13 18:59:06.672628 systemd[1]: Started cri-containerd-bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270.scope - libcontainer container bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270. Feb 13 18:59:06.707073 containerd[1445]: time="2025-02-13T18:59:06.707029508Z" level=info msg="StartContainer for \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\" returns successfully" Feb 13 18:59:06.752845 systemd[1]: cri-containerd-bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270.scope: Deactivated successfully. 
Feb 13 18:59:06.906027 containerd[1445]: time="2025-02-13T18:59:06.893971558Z" level=info msg="shim disconnected" id=bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270 namespace=k8s.io Feb 13 18:59:06.906027 containerd[1445]: time="2025-02-13T18:59:06.905964603Z" level=warning msg="cleaning up after shim disconnected" id=bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270 namespace=k8s.io Feb 13 18:59:06.906027 containerd[1445]: time="2025-02-13T18:59:06.905981368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:59:07.315133 kubelet[2529]: E0213 18:59:07.315086 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:07.318124 containerd[1445]: time="2025-02-13T18:59:07.316922226Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 18:59:07.358212 containerd[1445]: time="2025-02-13T18:59:07.358158648Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\"" Feb 13 18:59:07.358852 containerd[1445]: time="2025-02-13T18:59:07.358817359Z" level=info msg="StartContainer for \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\"" Feb 13 18:59:07.381510 systemd[1]: Started cri-containerd-996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e.scope - libcontainer container 996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e. Feb 13 18:59:07.415699 containerd[1445]: time="2025-02-13T18:59:07.415654259Z" level=info msg="StartContainer for \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\" returns successfully" Feb 13 18:59:07.420822 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 18:59:07.421032 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:59:07.421105 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:59:07.426746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:59:07.426944 systemd[1]: cri-containerd-996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e.scope: Deactivated successfully. Feb 13 18:59:07.450758 containerd[1445]: time="2025-02-13T18:59:07.450668640Z" level=info msg="shim disconnected" id=996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e namespace=k8s.io Feb 13 18:59:07.450758 containerd[1445]: time="2025-02-13T18:59:07.450721175Z" level=warning msg="cleaning up after shim disconnected" id=996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e namespace=k8s.io Feb 13 18:59:07.450758 containerd[1445]: time="2025-02-13T18:59:07.450730978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:59:07.452552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 18:59:07.485617 containerd[1445]: time="2025-02-13T18:59:07.485554263Z" level=warning msg="cleanup warnings time=\"2025-02-13T18:59:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 18:59:07.625574 systemd[1]: run-containerd-runc-k8s.io-bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270-runc.c661Bh.mount: Deactivated successfully. Feb 13 18:59:07.625664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270-rootfs.mount: Deactivated successfully. Feb 13 18:59:08.200885 containerd[1445]: time="2025-02-13T18:59:08.200834100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:59:08.201864 containerd[1445]: time="2025-02-13T18:59:08.201675452Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 18:59:08.202441 containerd[1445]: time="2025-02-13T18:59:08.202384168Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:59:08.213785 containerd[1445]: time="2025-02-13T18:59:08.213730302Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.609292651s" Feb 13 18:59:08.213785 containerd[1445]: time="2025-02-13T18:59:08.213779075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 18:59:08.215933 containerd[1445]: time="2025-02-13T18:59:08.215899541Z" level=info msg="CreateContainer within sandbox \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 18:59:08.232209 containerd[1445]: time="2025-02-13T18:59:08.232160512Z" level=info msg="CreateContainer within sandbox \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\"" Feb 13 18:59:08.233152 containerd[1445]: time="2025-02-13T18:59:08.233124739Z" level=info msg="StartContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\"" Feb 13 18:59:08.264555 systemd[1]: Started cri-containerd-689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897.scope - libcontainer container 689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897. 
Feb 13 18:59:08.289729 containerd[1445]: time="2025-02-13T18:59:08.289607020Z" level=info msg="StartContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" returns successfully" Feb 13 18:59:08.319296 kubelet[2529]: E0213 18:59:08.319258 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:08.322614 containerd[1445]: time="2025-02-13T18:59:08.322574846Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 18:59:08.323819 kubelet[2529]: E0213 18:59:08.323750 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:08.353195 containerd[1445]: time="2025-02-13T18:59:08.353127765Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\"" Feb 13 18:59:08.353720 containerd[1445]: time="2025-02-13T18:59:08.353685439Z" level=info msg="StartContainer for \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\"" Feb 13 18:59:08.361275 kubelet[2529]: I0213 18:59:08.360783 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pxplm" podStartSLOduration=1.5334012970000002 podStartE2EDuration="10.360764874s" podCreationTimestamp="2025-02-13 18:58:58 +0000 UTC" firstStartedPulling="2025-02-13 18:58:59.387122854 +0000 UTC m=+7.218233833" lastFinishedPulling="2025-02-13 18:59:08.214486431 +0000 UTC m=+16.045597410" observedRunningTime="2025-02-13 18:59:08.360479916 +0000 UTC m=+16.191590895" watchObservedRunningTime="2025-02-13 18:59:08.360764874 +0000 UTC m=+16.191875813" Feb 13 18:59:08.391540 systemd[1]: Started cri-containerd-b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a.scope - libcontainer container b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a. Feb 13 18:59:08.434969 containerd[1445]: time="2025-02-13T18:59:08.434922077Z" level=info msg="StartContainer for \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\" returns successfully" Feb 13 18:59:08.437259 systemd[1]: cri-containerd-b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a.scope: Deactivated successfully. 
Feb 13 18:59:08.533656 containerd[1445]: time="2025-02-13T18:59:08.533485542Z" level=info msg="shim disconnected" id=b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a namespace=k8s.io Feb 13 18:59:08.533656 containerd[1445]: time="2025-02-13T18:59:08.533559802Z" level=warning msg="cleaning up after shim disconnected" id=b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a namespace=k8s.io Feb 13 18:59:08.533656 containerd[1445]: time="2025-02-13T18:59:08.533568805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:59:09.327108 kubelet[2529]: E0213 18:59:09.327002 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:09.329820 kubelet[2529]: E0213 18:59:09.327015 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:09.332903 containerd[1445]: time="2025-02-13T18:59:09.332784716Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 18:59:09.345271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085819099.mount: Deactivated successfully. Feb 13 18:59:09.351221 containerd[1445]: time="2025-02-13T18:59:09.351156680Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\"" Feb 13 18:59:09.352758 containerd[1445]: time="2025-02-13T18:59:09.352100329Z" level=info msg="StartContainer for \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\"" Feb 13 18:59:09.381547 systemd[1]: Started cri-containerd-6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5.scope - libcontainer container 6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5. Feb 13 18:59:09.402713 systemd[1]: cri-containerd-6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5.scope: Deactivated successfully. Feb 13 18:59:09.403408 containerd[1445]: time="2025-02-13T18:59:09.403339638Z" level=info msg="StartContainer for \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\" returns successfully" Feb 13 18:59:09.431689 containerd[1445]: time="2025-02-13T18:59:09.431612252Z" level=info msg="shim disconnected" id=6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5 namespace=k8s.io Feb 13 18:59:09.431689 containerd[1445]: time="2025-02-13T18:59:09.431677910Z" level=warning msg="cleaning up after shim disconnected" id=6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5 namespace=k8s.io Feb 13 18:59:09.431689 containerd[1445]: time="2025-02-13T18:59:09.431687312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:59:09.625255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5-rootfs.mount: Deactivated successfully. 
Feb 13 18:59:10.335573 kubelet[2529]: E0213 18:59:10.335048 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:59:10.337621 containerd[1445]: time="2025-02-13T18:59:10.337529555Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 18:59:10.349993 containerd[1445]: time="2025-02-13T18:59:10.349902271Z" level=info msg="CreateContainer within sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\"" Feb 13 18:59:10.351566 containerd[1445]: time="2025-02-13T18:59:10.351524600Z" level=info msg="StartContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\"" Feb 13 18:59:10.379579 systemd[1]: Started cri-containerd-0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c.scope - libcontainer container 0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c. Feb 13 18:59:10.406223 containerd[1445]: time="2025-02-13T18:59:10.406180567Z" level=info msg="StartContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" returns successfully" Feb 13 18:59:10.574918 kubelet[2529]: I0213 18:59:10.574874 2529 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 18:59:10.616213 systemd[1]: Created slice kubepods-burstable-pod02cc0529_3a4d_4062_a70d_1422d61821d1.slice - libcontainer container kubepods-burstable-pod02cc0529_3a4d_4062_a70d_1422d61821d1.slice. Feb 13 18:59:10.623493 systemd[1]: Created slice kubepods-burstable-podcaa49782_dd15_4f65_a6e6_0a5543edc269.slice - libcontainer container kubepods-burstable-podcaa49782_dd15_4f65_a6e6_0a5543edc269.slice. 
Feb 13 18:59:10.755440 kubelet[2529]: I0213 18:59:10.755386 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wsvv\" (UniqueName: \"kubernetes.io/projected/caa49782-dd15-4f65-a6e6-0a5543edc269-kube-api-access-2wsvv\") pod \"coredns-6f6b679f8f-t6t5d\" (UID: \"caa49782-dd15-4f65-a6e6-0a5543edc269\") " pod="kube-system/coredns-6f6b679f8f-t6t5d"
Feb 13 18:59:10.755440 kubelet[2529]: I0213 18:59:10.755441 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caa49782-dd15-4f65-a6e6-0a5543edc269-config-volume\") pod \"coredns-6f6b679f8f-t6t5d\" (UID: \"caa49782-dd15-4f65-a6e6-0a5543edc269\") " pod="kube-system/coredns-6f6b679f8f-t6t5d"
Feb 13 18:59:10.755626 kubelet[2529]: I0213 18:59:10.755468 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc0529-3a4d-4062-a70d-1422d61821d1-config-volume\") pod \"coredns-6f6b679f8f-x4xjb\" (UID: \"02cc0529-3a4d-4062-a70d-1422d61821d1\") " pod="kube-system/coredns-6f6b679f8f-x4xjb"
Feb 13 18:59:10.755626 kubelet[2529]: I0213 18:59:10.755491 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln4c\" (UniqueName: \"kubernetes.io/projected/02cc0529-3a4d-4062-a70d-1422d61821d1-kube-api-access-hln4c\") pod \"coredns-6f6b679f8f-x4xjb\" (UID: \"02cc0529-3a4d-4062-a70d-1422d61821d1\") " pod="kube-system/coredns-6f6b679f8f-x4xjb"
Feb 13 18:59:10.920022 kubelet[2529]: E0213 18:59:10.919889 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:10.924183 containerd[1445]: time="2025-02-13T18:59:10.923826713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x4xjb,Uid:02cc0529-3a4d-4062-a70d-1422d61821d1,Namespace:kube-system,Attempt:0,}"
Feb 13 18:59:10.930770 kubelet[2529]: E0213 18:59:10.930726 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:10.931488 containerd[1445]: time="2025-02-13T18:59:10.931434989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t6t5d,Uid:caa49782-dd15-4f65-a6e6-0a5543edc269,Namespace:kube-system,Attempt:0,}"
Feb 13 18:59:11.340137 kubelet[2529]: E0213 18:59:11.340077 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:12.346375 kubelet[2529]: E0213 18:59:12.343671 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:12.643872 systemd-networkd[1378]: cilium_host: Link UP
Feb 13 18:59:12.644002 systemd-networkd[1378]: cilium_net: Link UP
Feb 13 18:59:12.644005 systemd-networkd[1378]: cilium_net: Gained carrier
Feb 13 18:59:12.644130 systemd-networkd[1378]: cilium_host: Gained carrier
Feb 13 18:59:12.647165 systemd-networkd[1378]: cilium_host: Gained IPv6LL
Feb 13 18:59:12.740352 systemd-networkd[1378]: cilium_vxlan: Link UP
Feb 13 18:59:12.740734 systemd-networkd[1378]: cilium_vxlan: Gained carrier
Feb 13 18:59:13.037401 kernel: NET: Registered PF_ALG protocol family
Feb 13 18:59:13.227498 systemd-networkd[1378]: cilium_net: Gained IPv6LL
Feb 13 18:59:13.342967 kubelet[2529]: E0213 18:59:13.342860 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:13.610456 systemd-networkd[1378]: lxc_health: Link UP
Feb 13 18:59:13.615343 systemd-networkd[1378]: lxc_health: Gained carrier
Feb 13 18:59:14.061315 systemd-networkd[1378]: lxcd94e3ca5f64f: Link UP
Feb 13 18:59:14.067489 kernel: eth0: renamed from tmp8167e
Feb 13 18:59:14.075975 systemd-networkd[1378]: lxc91859d9f16fa: Link UP
Feb 13 18:59:14.088699 systemd-networkd[1378]: lxcd94e3ca5f64f: Gained carrier
Feb 13 18:59:14.091388 kernel: eth0: renamed from tmpa9a42
Feb 13 18:59:14.096333 systemd-networkd[1378]: lxc91859d9f16fa: Gained carrier
Feb 13 18:59:14.123497 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL
Feb 13 18:59:14.827547 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Feb 13 18:59:15.056389 kubelet[2529]: E0213 18:59:15.056332 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:15.073657 kubelet[2529]: I0213 18:59:15.073113 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b8c7v" podStartSLOduration=9.581302465 podStartE2EDuration="17.073095676s" podCreationTimestamp="2025-02-13 18:58:58 +0000 UTC" firstStartedPulling="2025-02-13 18:58:59.11158964 +0000 UTC m=+6.942700619" lastFinishedPulling="2025-02-13 18:59:06.603382851 +0000 UTC m=+14.434493830" observedRunningTime="2025-02-13 18:59:11.380621346 +0000 UTC m=+19.211732325" watchObservedRunningTime="2025-02-13 18:59:15.073095676 +0000 UTC m=+22.904206655"
Feb 13 18:59:15.147568 systemd-networkd[1378]: lxc91859d9f16fa: Gained IPv6LL
Feb 13 18:59:15.275538 systemd-networkd[1378]: lxcd94e3ca5f64f: Gained IPv6LL
Feb 13 18:59:15.346760 kubelet[2529]: E0213 18:59:15.346716 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:16.350390 kubelet[2529]: E0213 18:59:16.350343 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:17.745208 containerd[1445]: time="2025-02-13T18:59:17.745102775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:59:17.745208 containerd[1445]: time="2025-02-13T18:59:17.745169147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:59:17.745208 containerd[1445]: time="2025-02-13T18:59:17.745180350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:59:17.746673 containerd[1445]: time="2025-02-13T18:59:17.745272167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:59:17.760779 containerd[1445]: time="2025-02-13T18:59:17.760696862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:59:17.760779 containerd[1445]: time="2025-02-13T18:59:17.760751272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:59:17.761080 containerd[1445]: time="2025-02-13T18:59:17.760762594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:59:17.761080 containerd[1445]: time="2025-02-13T18:59:17.760839329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:59:17.764540 systemd[1]: Started cri-containerd-a9a426d66c476da12cc9f56d0a138f5812191a408d1abb3294f5735aab0609ea.scope - libcontainer container a9a426d66c476da12cc9f56d0a138f5812191a408d1abb3294f5735aab0609ea.
Feb 13 18:59:17.788603 systemd[1]: Started cri-containerd-8167e85095932dd2e5bbf8b96da71f4ba269a7b776dc470590643d5f937a0785.scope - libcontainer container 8167e85095932dd2e5bbf8b96da71f4ba269a7b776dc470590643d5f937a0785.
Feb 13 18:59:17.791925 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 18:59:17.803110 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 18:59:17.812565 containerd[1445]: time="2025-02-13T18:59:17.812491543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t6t5d,Uid:caa49782-dd15-4f65-a6e6-0a5543edc269,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9a426d66c476da12cc9f56d0a138f5812191a408d1abb3294f5735aab0609ea\""
Feb 13 18:59:17.813395 kubelet[2529]: E0213 18:59:17.813354 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:17.816766 containerd[1445]: time="2025-02-13T18:59:17.816645403Z" level=info msg="CreateContainer within sandbox \"a9a426d66c476da12cc9f56d0a138f5812191a408d1abb3294f5735aab0609ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 18:59:17.828705 containerd[1445]: time="2025-02-13T18:59:17.828585124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x4xjb,Uid:02cc0529-3a4d-4062-a70d-1422d61821d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8167e85095932dd2e5bbf8b96da71f4ba269a7b776dc470590643d5f937a0785\""
Feb 13 18:59:17.829407 kubelet[2529]: E0213 18:59:17.829380 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:17.830175 containerd[1445]: time="2025-02-13T18:59:17.830133854Z" level=info msg="CreateContainer within sandbox \"a9a426d66c476da12cc9f56d0a138f5812191a408d1abb3294f5735aab0609ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60f02901a398e2a3c71d01ca2afffa5c3e984ac8eccfff890d444744b73e9935\""
Feb 13 18:59:17.830824 containerd[1445]: time="2025-02-13T18:59:17.830610824Z" level=info msg="StartContainer for \"60f02901a398e2a3c71d01ca2afffa5c3e984ac8eccfff890d444744b73e9935\""
Feb 13 18:59:17.831092 containerd[1445]: time="2025-02-13T18:59:17.831061909Z" level=info msg="CreateContainer within sandbox \"8167e85095932dd2e5bbf8b96da71f4ba269a7b776dc470590643d5f937a0785\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 18:59:17.850697 containerd[1445]: time="2025-02-13T18:59:17.850649105Z" level=info msg="CreateContainer within sandbox \"8167e85095932dd2e5bbf8b96da71f4ba269a7b776dc470590643d5f937a0785\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85e5c7ecf623e8ab6b3c4221106383ab1e277c04d2c613425101919f4b519027\""
Feb 13 18:59:17.851320 containerd[1445]: time="2025-02-13T18:59:17.851293746Z" level=info msg="StartContainer for \"85e5c7ecf623e8ab6b3c4221106383ab1e277c04d2c613425101919f4b519027\""
Feb 13 18:59:17.855609 systemd[1]: Started cri-containerd-60f02901a398e2a3c71d01ca2afffa5c3e984ac8eccfff890d444744b73e9935.scope - libcontainer container 60f02901a398e2a3c71d01ca2afffa5c3e984ac8eccfff890d444744b73e9935.
Feb 13 18:59:17.880574 systemd[1]: Started cri-containerd-85e5c7ecf623e8ab6b3c4221106383ab1e277c04d2c613425101919f4b519027.scope - libcontainer container 85e5c7ecf623e8ab6b3c4221106383ab1e277c04d2c613425101919f4b519027.
Feb 13 18:59:17.886933 containerd[1445]: time="2025-02-13T18:59:17.886343204Z" level=info msg="StartContainer for \"60f02901a398e2a3c71d01ca2afffa5c3e984ac8eccfff890d444744b73e9935\" returns successfully"
Feb 13 18:59:17.926548 containerd[1445]: time="2025-02-13T18:59:17.926331630Z" level=info msg="StartContainer for \"85e5c7ecf623e8ab6b3c4221106383ab1e277c04d2c613425101919f4b519027\" returns successfully"
Feb 13 18:59:18.355167 kubelet[2529]: E0213 18:59:18.354975 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:18.358771 kubelet[2529]: E0213 18:59:18.358465 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:18.366985 kubelet[2529]: I0213 18:59:18.366903 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t6t5d" podStartSLOduration=20.366883034 podStartE2EDuration="20.366883034s" podCreationTimestamp="2025-02-13 18:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:59:18.365279864 +0000 UTC m=+26.196390843" watchObservedRunningTime="2025-02-13 18:59:18.366883034 +0000 UTC m=+26.197994013"
Feb 13 18:59:18.398934 kubelet[2529]: I0213 18:59:18.398864 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-x4xjb" podStartSLOduration=20.398667176 podStartE2EDuration="20.398667176s" podCreationTimestamp="2025-02-13 18:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:59:18.387251114 +0000 UTC m=+26.218362053" watchObservedRunningTime="2025-02-13 18:59:18.398667176 +0000 UTC m=+26.229778155"
Feb 13 18:59:19.360019 kubelet[2529]: E0213 18:59:19.359912 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:19.360019 kubelet[2529]: E0213 18:59:19.359991 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:19.411097 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:54948.service - OpenSSH per-connection server daemon (10.0.0.1:54948).
Feb 13 18:59:19.459998 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 54948 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:59:19.461520 sshd-session[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:19.465653 systemd-logind[1432]: New session 8 of user core.
Feb 13 18:59:19.472610 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 18:59:19.593880 sshd[3944]: Connection closed by 10.0.0.1 port 54948
Feb 13 18:59:19.594205 sshd-session[3942]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:19.597194 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:54948.service: Deactivated successfully.
Feb 13 18:59:19.599870 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 18:59:19.600707 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit.
Feb 13 18:59:19.601609 systemd-logind[1432]: Removed session 8.
Feb 13 18:59:20.363666 kubelet[2529]: E0213 18:59:20.363584 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:20.363666 kubelet[2529]: E0213 18:59:20.363640 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:59:24.606236 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:39830.service - OpenSSH per-connection server daemon (10.0.0.1:39830).
Feb 13 18:59:24.652467 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 39830 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:59:24.653800 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:24.658545 systemd-logind[1432]: New session 9 of user core.
Feb 13 18:59:24.664571 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 18:59:24.809761 sshd[3962]: Connection closed by 10.0.0.1 port 39830
Feb 13 18:59:24.809553 sshd-session[3960]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:24.813511 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:39830.service: Deactivated successfully.
Feb 13 18:59:24.815077 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 18:59:24.816552 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit.
Feb 13 18:59:24.817673 systemd-logind[1432]: Removed session 9.
Feb 13 18:59:29.826094 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:39832.service - OpenSSH per-connection server daemon (10.0.0.1:39832).
Feb 13 18:59:29.873892 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:59:29.875164 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:29.879397 systemd-logind[1432]: New session 10 of user core.
Feb 13 18:59:29.888533 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 18:59:30.011114 sshd[3979]: Connection closed by 10.0.0.1 port 39832
Feb 13 18:59:30.011613 sshd-session[3977]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:30.025852 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:39832.service: Deactivated successfully.
Feb 13 18:59:30.028804 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 18:59:30.036102 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit. Feb 13 18:59:30.058000 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:39840.service - OpenSSH per-connection server daemon (10.0.0.1:39840). Feb 13 18:59:30.062553 systemd-logind[1432]: Removed session 10. Feb 13 18:59:30.108719 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:30.110148 sshd-session[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:30.114016 systemd-logind[1432]: New session 11 of user core. Feb 13 18:59:30.125692 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 18:59:30.286122 sshd[3996]: Connection closed by 10.0.0.1 port 39840 Feb 13 18:59:30.285884 sshd-session[3994]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:30.298052 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:39840.service: Deactivated successfully. Feb 13 18:59:30.301911 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 18:59:30.303429 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Feb 13 18:59:30.313616 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:39852.service - OpenSSH per-connection server daemon (10.0.0.1:39852). Feb 13 18:59:30.315435 systemd-logind[1432]: Removed session 11. Feb 13 18:59:30.364302 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 39852 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:30.365565 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:30.370170 systemd-logind[1432]: New session 12 of user core. Feb 13 18:59:30.379539 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 18:59:30.492212 sshd[4009]: Connection closed by 10.0.0.1 port 39852 Feb 13 18:59:30.493585 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:30.495902 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 18:59:30.497553 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Feb 13 18:59:30.498207 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:39852.service: Deactivated successfully. Feb 13 18:59:30.500512 systemd-logind[1432]: Removed session 12. Feb 13 18:59:35.506132 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:46816.service - OpenSSH per-connection server daemon (10.0.0.1:46816). Feb 13 18:59:35.553599 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 46816 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:35.554871 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:35.560087 systemd-logind[1432]: New session 13 of user core. Feb 13 18:59:35.571544 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 18:59:35.688500 sshd[4023]: Connection closed by 10.0.0.1 port 46816 Feb 13 18:59:35.689041 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:35.692632 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:46816.service: Deactivated successfully. Feb 13 18:59:35.694398 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 18:59:35.695967 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Feb 13 18:59:35.696803 systemd-logind[1432]: Removed session 13. 
Feb 13 18:59:40.702257 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828). Feb 13 18:59:40.755686 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:40.757118 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:40.760877 systemd-logind[1432]: New session 14 of user core. Feb 13 18:59:40.773551 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 18:59:40.905096 sshd[4038]: Connection closed by 10.0.0.1 port 46828 Feb 13 18:59:40.906507 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:40.914990 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:46828.service: Deactivated successfully. Feb 13 18:59:40.916604 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 18:59:40.917928 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Feb 13 18:59:40.919456 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:46834.service - OpenSSH per-connection server daemon (10.0.0.1:46834). Feb 13 18:59:40.920177 systemd-logind[1432]: Removed session 14. Feb 13 18:59:40.983616 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 46834 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:40.984835 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:40.988842 systemd-logind[1432]: New session 15 of user core. Feb 13 18:59:40.997576 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 18:59:41.266444 sshd[4053]: Connection closed by 10.0.0.1 port 46834 Feb 13 18:59:41.267186 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:41.274007 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:46834.service: Deactivated successfully. Feb 13 18:59:41.276331 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 18:59:41.277890 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Feb 13 18:59:41.284724 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:46840.service - OpenSSH per-connection server daemon (10.0.0.1:46840). Feb 13 18:59:41.286033 systemd-logind[1432]: Removed session 15. Feb 13 18:59:41.331681 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 46840 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:41.333072 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:41.337090 systemd-logind[1432]: New session 16 of user core. Feb 13 18:59:41.345543 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 18:59:42.713157 sshd[4065]: Connection closed by 10.0.0.1 port 46840 Feb 13 18:59:42.714272 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:42.722460 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:46840.service: Deactivated successfully. Feb 13 18:59:42.724972 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 18:59:42.728150 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. Feb 13 18:59:42.733320 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474). Feb 13 18:59:42.737884 systemd-logind[1432]: Removed session 16. 
Feb 13 18:59:42.789016 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:42.790508 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:42.794834 systemd-logind[1432]: New session 17 of user core. Feb 13 18:59:42.805608 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 18:59:43.044437 sshd[4084]: Connection closed by 10.0.0.1 port 47474 Feb 13 18:59:43.045040 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:43.055600 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:47474.service: Deactivated successfully. Feb 13 18:59:43.057636 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 18:59:43.060647 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Feb 13 18:59:43.070718 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:47476.service - OpenSSH per-connection server daemon (10.0.0.1:47476). Feb 13 18:59:43.071727 systemd-logind[1432]: Removed session 17. Feb 13 18:59:43.114870 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 47476 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:43.116153 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:43.120597 systemd-logind[1432]: New session 18 of user core. Feb 13 18:59:43.131569 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 18:59:43.249979 sshd[4096]: Connection closed by 10.0.0.1 port 47476 Feb 13 18:59:43.251163 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:43.255218 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:47476.service: Deactivated successfully. Feb 13 18:59:43.257266 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 18:59:43.258325 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Feb 13 18:59:43.259307 systemd-logind[1432]: Removed session 18. Feb 13 18:59:48.263043 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:47492.service - OpenSSH per-connection server daemon (10.0.0.1:47492). Feb 13 18:59:48.312105 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 47492 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:48.313665 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:48.317851 systemd-logind[1432]: New session 19 of user core. Feb 13 18:59:48.328970 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 18:59:48.457200 sshd[4113]: Connection closed by 10.0.0.1 port 47492 Feb 13 18:59:48.457810 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:48.463251 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:47492.service: Deactivated successfully. Feb 13 18:59:48.465070 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 18:59:48.465868 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Feb 13 18:59:48.466726 systemd-logind[1432]: Removed session 19. Feb 13 18:59:53.473798 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:35088.service - OpenSSH per-connection server daemon (10.0.0.1:35088). 
Feb 13 18:59:53.519562 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 35088 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:53.520924 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:53.526623 systemd-logind[1432]: New session 20 of user core. Feb 13 18:59:53.534589 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 18:59:53.665295 sshd[4130]: Connection closed by 10.0.0.1 port 35088 Feb 13 18:59:53.665943 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:53.669808 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:35088.service: Deactivated successfully. Feb 13 18:59:53.671641 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 18:59:53.675709 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. Feb 13 18:59:53.677396 systemd-logind[1432]: Removed session 20. Feb 13 18:59:58.690828 systemd[1]: Started sshd@20-10.0.0.78:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Feb 13 18:59:58.746469 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:59:58.747129 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:59:58.754095 systemd-logind[1432]: New session 21 of user core. Feb 13 18:59:58.759572 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 18:59:58.890927 sshd[4144]: Connection closed by 10.0.0.1 port 35098 Feb 13 18:59:58.891500 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Feb 13 18:59:58.895677 systemd[1]: sshd@20-10.0.0.78:22-10.0.0.1:35098.service: Deactivated successfully. Feb 13 18:59:58.897943 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 18:59:58.900142 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit. Feb 13 18:59:58.901413 systemd-logind[1432]: Removed session 21. Feb 13 19:00:03.902760 systemd[1]: Started sshd@21-10.0.0.78:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610). Feb 13 19:00:03.961767 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:03.963091 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:03.967469 systemd-logind[1432]: New session 22 of user core. Feb 13 19:00:03.977570 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:00:04.109430 sshd[4161]: Connection closed by 10.0.0.1 port 34610 Feb 13 19:00:04.110802 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:04.117823 systemd[1]: sshd@21-10.0.0.78:22-10.0.0.1:34610.service: Deactivated successfully. Feb 13 19:00:04.119584 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:00:04.121414 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:00:04.132964 systemd[1]: Started sshd@22-10.0.0.78:22-10.0.0.1:34626.service - OpenSSH per-connection server daemon (10.0.0.1:34626). Feb 13 19:00:04.134468 systemd-logind[1432]: Removed session 22. 
Feb 13 19:00:04.179254 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:04.181091 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:04.186032 systemd-logind[1432]: New session 23 of user core. Feb 13 19:00:04.193580 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:00:04.246751 kubelet[2529]: E0213 19:00:04.246708 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:06.618805 containerd[1445]: time="2025-02-13T19:00:06.618732534Z" level=info msg="StopContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" with timeout 30 (s)" Feb 13 19:00:06.620380 containerd[1445]: time="2025-02-13T19:00:06.620176969Z" level=info msg="Stop container \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" with signal terminated" Feb 13 19:00:06.636935 systemd[1]: cri-containerd-689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897.scope: Deactivated successfully. Feb 13 19:00:06.659098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897-rootfs.mount: Deactivated successfully. Feb 13 19:00:06.670565 containerd[1445]: time="2025-02-13T19:00:06.670490104Z" level=info msg="shim disconnected" id=689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897 namespace=k8s.io Feb 13 19:00:06.670565 containerd[1445]: time="2025-02-13T19:00:06.670549901Z" level=warning msg="cleaning up after shim disconnected" id=689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897 namespace=k8s.io Feb 13 19:00:06.670565 containerd[1445]: time="2025-02-13T19:00:06.670559140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:06.683614 containerd[1445]: time="2025-02-13T19:00:06.683553860Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:00:06.702275 containerd[1445]: time="2025-02-13T19:00:06.702236406Z" level=info msg="StopContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" with timeout 2 (s)" Feb 13 19:00:06.702621 containerd[1445]: time="2025-02-13T19:00:06.702593345Z" level=info msg="Stop container \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" with signal terminated" Feb 13 19:00:06.708405 systemd-networkd[1378]: lxc_health: Link DOWN Feb 13 19:00:06.708411 systemd-networkd[1378]: lxc_health: Lost carrier Feb 13 19:00:06.733786 systemd[1]: cri-containerd-0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c.scope: Deactivated successfully. Feb 13 19:00:06.734198 systemd[1]: cri-containerd-0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c.scope: Consumed 6.611s CPU time. 
Feb 13 19:00:06.742848 containerd[1445]: time="2025-02-13T19:00:06.742780953Z" level=info msg="StopContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" returns successfully" Feb 13 19:00:06.745934 containerd[1445]: time="2025-02-13T19:00:06.745677424Z" level=info msg="StopPodSandbox for \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\"" Feb 13 19:00:06.755995 containerd[1445]: time="2025-02-13T19:00:06.755930904Z" level=info msg="Container to stop \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:00:06.759070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c-rootfs.mount: Deactivated successfully. Feb 13 19:00:06.759195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5-shm.mount: Deactivated successfully. Feb 13 19:00:06.766929 systemd[1]: cri-containerd-c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5.scope: Deactivated successfully. Feb 13 19:00:06.768953 containerd[1445]: time="2025-02-13T19:00:06.768879826Z" level=info msg="shim disconnected" id=0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c namespace=k8s.io Feb 13 19:00:06.768953 containerd[1445]: time="2025-02-13T19:00:06.768951982Z" level=warning msg="cleaning up after shim disconnected" id=0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c namespace=k8s.io Feb 13 19:00:06.768953 containerd[1445]: time="2025-02-13T19:00:06.768961301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:06.790411 containerd[1445]: time="2025-02-13T19:00:06.789943673Z" level=info msg="StopContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" returns successfully" Feb 13 19:00:06.791506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5-rootfs.mount: Deactivated successfully. 
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792363051Z" level=info msg="StopPodSandbox for \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\""
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792446806Z" level=info msg="Container to stop \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792480404Z" level=info msg="Container to stop \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792492924Z" level=info msg="Container to stop \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792501443Z" level=info msg="Container to stop \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:06.792619 containerd[1445]: time="2025-02-13T19:00:06.792510043Z" level=info msg="Container to stop \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:06.799129 systemd[1]: cri-containerd-a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583.scope: Deactivated successfully.
Feb 13 19:00:06.799906 containerd[1445]: time="2025-02-13T19:00:06.799103617Z" level=info msg="shim disconnected" id=c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5 namespace=k8s.io
Feb 13 19:00:06.799906 containerd[1445]: time="2025-02-13T19:00:06.799462916Z" level=warning msg="cleaning up after shim disconnected" id=c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5 namespace=k8s.io
Feb 13 19:00:06.799906 containerd[1445]: time="2025-02-13T19:00:06.799482635Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:06.813114 containerd[1445]: time="2025-02-13T19:00:06.812955486Z" level=info msg="TearDown network for sandbox \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\" successfully"
Feb 13 19:00:06.813114 containerd[1445]: time="2025-02-13T19:00:06.812992724Z" level=info msg="StopPodSandbox for \"c328b69ce43a09e083513450691d3fa27778b408ed36cfb6f0685f026bb283c5\" returns successfully"
Feb 13 19:00:06.856505 containerd[1445]: time="2025-02-13T19:00:06.856444221Z" level=info msg="shim disconnected" id=a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583 namespace=k8s.io
Feb 13 19:00:06.857038 containerd[1445]: time="2025-02-13T19:00:06.856824679Z" level=warning msg="cleaning up after shim disconnected" id=a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583 namespace=k8s.io
Feb 13 19:00:06.857038 containerd[1445]: time="2025-02-13T19:00:06.856844437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:06.877811 containerd[1445]: time="2025-02-13T19:00:06.877639180Z" level=info msg="TearDown network for sandbox \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" successfully"
Feb 13 19:00:06.877811 containerd[1445]: time="2025-02-13T19:00:06.877679458Z" level=info msg="StopPodSandbox for \"a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583\" returns successfully"
Feb 13 19:00:06.999103 kubelet[2529]: I0213 19:00:06.999047 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ldc\" (UniqueName: \"kubernetes.io/projected/acf79cad-5434-4cad-9732-122dd8973263-kube-api-access-n9ldc\") pod \"acf79cad-5434-4cad-9732-122dd8973263\" (UID: \"acf79cad-5434-4cad-9732-122dd8973263\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999127 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-xtables-lock\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999165 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-run\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999181 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-lib-modules\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999195 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-net\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999212 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hostproc\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999528 kubelet[2529]: I0213 19:00:06.999229 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-etc-cni-netd\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999243 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cni-path\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999261 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-config-path\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999279 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf79cad-5434-4cad-9732-122dd8973263-cilium-config-path\") pod \"acf79cad-5434-4cad-9732-122dd8973263\" (UID: \"acf79cad-5434-4cad-9732-122dd8973263\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999296 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hubble-tls\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999312 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6dl7\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-kube-api-access-j6dl7\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999680 kubelet[2529]: I0213 19:00:06.999326 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-cgroup\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999813 kubelet[2529]: I0213 19:00:06.999345 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-clustermesh-secrets\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999813 kubelet[2529]: I0213 19:00:06.999360 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-kernel\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:06.999813 kubelet[2529]: I0213 19:00:06.999400 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-bpf-maps\") pod \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\" (UID: \"a2879869-f4c9-4451-bbd2-3e5ac5a899eb\") "
Feb 13 19:00:07.007837 kubelet[2529]: I0213 19:00:07.007375 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:00:07.007837 kubelet[2529]: I0213 19:00:07.007462 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:00:07.007837 kubelet[2529]: I0213 19:00:07.007484 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.007837 kubelet[2529]: I0213 19:00:07.007499 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.007837 kubelet[2529]: I0213 19:00:07.007514 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.008063 kubelet[2529]: I0213 19:00:07.007539 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.008063 kubelet[2529]: I0213 19:00:07.007553 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.008540 kubelet[2529]: I0213 19:00:07.008496 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.010501 kubelet[2529]: I0213 19:00:07.010455 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf79cad-5434-4cad-9732-122dd8973263-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acf79cad-5434-4cad-9732-122dd8973263" (UID: "acf79cad-5434-4cad-9732-122dd8973263"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:00:07.013040 kubelet[2529]: I0213 19:00:07.012764 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf79cad-5434-4cad-9732-122dd8973263-kube-api-access-n9ldc" (OuterVolumeSpecName: "kube-api-access-n9ldc") pod "acf79cad-5434-4cad-9732-122dd8973263" (UID: "acf79cad-5434-4cad-9732-122dd8973263"). InnerVolumeSpecName "kube-api-access-n9ldc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:00:07.013040 kubelet[2529]: I0213 19:00:07.012824 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-kube-api-access-j6dl7" (OuterVolumeSpecName: "kube-api-access-j6dl7") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "kube-api-access-j6dl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:00:07.013536 kubelet[2529]: I0213 19:00:07.013406 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.014156 kubelet[2529]: I0213 19:00:07.013990 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.014156 kubelet[2529]: I0213 19:00:07.014122 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:00:07.015062 kubelet[2529]: I0213 19:00:07.014939 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:00:07.016360 kubelet[2529]: I0213 19:00:07.016282 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2879869-f4c9-4451-bbd2-3e5ac5a899eb" (UID: "a2879869-f4c9-4451-bbd2-3e5ac5a899eb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:00:07.101742 kubelet[2529]: I0213 19:00:07.101683 2529 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101872 2529 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j6dl7\" (UniqueName: \"kubernetes.io/projected/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-kube-api-access-j6dl7\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101908 2529 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101918 2529 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101927 2529 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101936 2529 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102171 kubelet[2529]: I0213 19:00:07.101944 2529 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n9ldc\" (UniqueName: \"kubernetes.io/projected/acf79cad-5434-4cad-9732-122dd8973263-kube-api-access-n9ldc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.101952 2529 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102308 2529 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102318 2529 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102326 2529 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102358 2529 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102384 2529 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-etc-cni-netd\") on node 
\"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102393 2529 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102581 kubelet[2529]: I0213 19:00:07.102401 2529 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2879869-f4c9-4451-bbd2-3e5ac5a899eb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.102808 kubelet[2529]: I0213 19:00:07.102411 2529 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf79cad-5434-4cad-9732-122dd8973263-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:00:07.303180 kubelet[2529]: E0213 19:00:07.303130 2529 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:00:07.480441 kubelet[2529]: I0213 19:00:07.480218 2529 scope.go:117] "RemoveContainer" containerID="0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c" Feb 13 19:00:07.486391 systemd[1]: Removed slice kubepods-burstable-poda2879869_f4c9_4451_bbd2_3e5ac5a899eb.slice - libcontainer container kubepods-burstable-poda2879869_f4c9_4451_bbd2_3e5ac5a899eb.slice. Feb 13 19:00:07.486487 systemd[1]: kubepods-burstable-poda2879869_f4c9_4451_bbd2_3e5ac5a899eb.slice: Consumed 6.758s CPU time. Feb 13 19:00:07.490401 containerd[1445]: time="2025-02-13T19:00:07.489444230Z" level=info msg="RemoveContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\"" Feb 13 19:00:07.495329 containerd[1445]: time="2025-02-13T19:00:07.495254870Z" level=info msg="RemoveContainer for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" returns successfully" Feb 13 19:00:07.495775 kubelet[2529]: I0213 19:00:07.495646 2529 scope.go:117] "RemoveContainer" containerID="6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5" Feb 13 19:00:07.497287 containerd[1445]: time="2025-02-13T19:00:07.497247721Z" level=info msg="RemoveContainer for \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\"" Feb 13 19:00:07.500322 systemd[1]: Removed slice kubepods-besteffort-podacf79cad_5434_4cad_9732_122dd8973263.slice - libcontainer container kubepods-besteffort-podacf79cad_5434_4cad_9732_122dd8973263.slice. 
Feb 13 19:00:07.503289 containerd[1445]: time="2025-02-13T19:00:07.503234352Z" level=info msg="RemoveContainer for \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\" returns successfully" Feb 13 19:00:07.504585 kubelet[2529]: I0213 19:00:07.503496 2529 scope.go:117] "RemoveContainer" containerID="b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a" Feb 13 19:00:07.505770 containerd[1445]: time="2025-02-13T19:00:07.505730654Z" level=info msg="RemoveContainer for \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\"" Feb 13 19:00:07.513873 containerd[1445]: time="2025-02-13T19:00:07.513799291Z" level=info msg="RemoveContainer for \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\" returns successfully" Feb 13 19:00:07.514224 kubelet[2529]: I0213 19:00:07.514115 2529 scope.go:117] "RemoveContainer" containerID="996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e" Feb 13 19:00:07.516711 containerd[1445]: time="2025-02-13T19:00:07.516663933Z" level=info msg="RemoveContainer for \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\"" Feb 13 19:00:07.521082 containerd[1445]: time="2025-02-13T19:00:07.521030173Z" level=info msg="RemoveContainer for \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\" returns successfully" Feb 13 19:00:07.521475 kubelet[2529]: I0213 19:00:07.521438 2529 scope.go:117] "RemoveContainer" containerID="bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270" Feb 13 19:00:07.524542 containerd[1445]: time="2025-02-13T19:00:07.524500102Z" level=info msg="RemoveContainer for \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\"" Feb 13 19:00:07.528694 containerd[1445]: time="2025-02-13T19:00:07.528639875Z" level=info msg="RemoveContainer for \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\" returns successfully" Feb 13 19:00:07.529009 kubelet[2529]: I0213 19:00:07.528973 2529 scope.go:117] "RemoveContainer" containerID="0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c" Feb 13 19:00:07.530095 containerd[1445]: time="2025-02-13T19:00:07.529248041Z" level=error msg="ContainerStatus for \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\": not found" Feb 13 19:00:07.538643 kubelet[2529]: E0213 19:00:07.538604 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\": not found" containerID="0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c" Feb 13 19:00:07.539416 kubelet[2529]: I0213 19:00:07.538649 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c"} err="failed to get container status \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fedb0299f559868353a74323003e5699aaf66a8608e17652d5fbff466bf8d1c\": not found" Feb 13 19:00:07.539416 kubelet[2529]: I0213 19:00:07.538952 2529 scope.go:117] "RemoveContainer" containerID="6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5" Feb 13 19:00:07.542552 containerd[1445]: 
time="2025-02-13T19:00:07.539261211Z" level=error msg="ContainerStatus for \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\": not found" Feb 13 19:00:07.542552 containerd[1445]: time="2025-02-13T19:00:07.540566779Z" level=error msg="ContainerStatus for \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\": not found" Feb 13 19:00:07.542552 containerd[1445]: time="2025-02-13T19:00:07.541003275Z" level=error msg="ContainerStatus for \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\": not found" Feb 13 19:00:07.542552 containerd[1445]: time="2025-02-13T19:00:07.541348616Z" level=error msg="ContainerStatus for \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\": not found" Feb 13 19:00:07.542660 kubelet[2529]: E0213 19:00:07.539521 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\": not found" containerID="6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5" Feb 13 19:00:07.542660 kubelet[2529]: I0213 19:00:07.539557 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5"} err="failed to get container status \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bc30050ecd27d2434feb9397f88d511217d17bb61ef2194fd0cc8382853bda5\": not found" Feb 13 19:00:07.542660 kubelet[2529]: I0213 19:00:07.539576 2529 scope.go:117] "RemoveContainer" containerID="b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a" Feb 13 19:00:07.542660 kubelet[2529]: E0213 19:00:07.540782 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\": not found" containerID="b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a" Feb 13 19:00:07.542660 kubelet[2529]: I0213 19:00:07.540808 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a"} err="failed to get container status \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3a226020fb60109abf9687e27cc14fa238d7f178d4383903f6ce92d6b2b595a\": not found" Feb 13 19:00:07.542660 kubelet[2529]: I0213 19:00:07.540826 2529 scope.go:117] "RemoveContainer" containerID="996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e" Feb 13 19:00:07.542844 kubelet[2529]: E0213 19:00:07.541137 2529 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\": not found" containerID="996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e" Feb 13 19:00:07.542844 kubelet[2529]: I0213 19:00:07.541160 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e"} err="failed to get container status \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\": rpc error: code = NotFound desc = an error occurred when try to find container \"996eb71b89c5cba4fa7c6f47f00244d5e1953b90369f605ea6de823f7ba2123e\": not found" Feb 13 19:00:07.542844 kubelet[2529]: I0213 19:00:07.541182 2529 scope.go:117] "RemoveContainer" containerID="bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270" Feb 13 19:00:07.542844 kubelet[2529]: E0213 19:00:07.541576 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\": not found" containerID="bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270" Feb 13 19:00:07.542844 kubelet[2529]: I0213 19:00:07.541607 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270"} err="failed to get container status \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd2c80f43a07b77b031c4b1fc44749bf0739bcd379099b3196b3ae274f3a5270\": not found" Feb 13 19:00:07.542844 kubelet[2529]: I0213 19:00:07.541630 2529 scope.go:117] "RemoveContainer" containerID="689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897" Feb 13 19:00:07.544003 containerd[1445]: time="2025-02-13T19:00:07.543960113Z" level=info msg="RemoveContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\"" Feb 13 19:00:07.547731 containerd[1445]: time="2025-02-13T19:00:07.547671549Z" level=info msg="RemoveContainer for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" returns successfully" Feb 13 19:00:07.548054 kubelet[2529]: I0213 19:00:07.548000 2529 scope.go:117] "RemoveContainer" containerID="689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897" Feb 13 19:00:07.548329 containerd[1445]: time="2025-02-13T19:00:07.548278715Z" level=error msg="ContainerStatus for \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\": not found" Feb 13 19:00:07.548470 kubelet[2529]: E0213 19:00:07.548434 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\": not found" containerID="689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897" Feb 13 19:00:07.548521 kubelet[2529]: I0213 19:00:07.548474 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897"} err="failed 
to get container status \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\": rpc error: code = NotFound desc = an error occurred when try to find container \"689c7a32b8506301530a60e8149388c98bb8041d84e343ae27a5388728b62897\": not found" Feb 13 19:00:07.650802 systemd[1]: var-lib-kubelet-pods-acf79cad\x2d5434\x2d4cad\x2d9732\x2d122dd8973263-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn9ldc.mount: Deactivated successfully. Feb 13 19:00:07.650914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583-rootfs.mount: Deactivated successfully. Feb 13 19:00:07.650976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a36dcd999de4482b22b64929da9a75fcb06e661b880d9d17145ab90da055f583-shm.mount: Deactivated successfully. Feb 13 19:00:07.651028 systemd[1]: var-lib-kubelet-pods-a2879869\x2df4c9\x2d4451\x2dbbd2\x2d3e5ac5a899eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6dl7.mount: Deactivated successfully. Feb 13 19:00:07.651081 systemd[1]: var-lib-kubelet-pods-a2879869\x2df4c9\x2d4451\x2dbbd2\x2d3e5ac5a899eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:00:07.651127 systemd[1]: var-lib-kubelet-pods-a2879869\x2df4c9\x2d4451\x2dbbd2\x2d3e5ac5a899eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:00:08.246804 kubelet[2529]: E0213 19:00:08.246761 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:08.250805 kubelet[2529]: I0213 19:00:08.249832 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" path="/var/lib/kubelet/pods/a2879869-f4c9-4451-bbd2-3e5ac5a899eb/volumes" Feb 13 19:00:08.250805 kubelet[2529]: I0213 19:00:08.250516 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf79cad-5434-4cad-9732-122dd8973263" path="/var/lib/kubelet/pods/acf79cad-5434-4cad-9732-122dd8973263/volumes" Feb 13 19:00:08.570313 sshd[4175]: Connection closed by 10.0.0.1 port 34626 Feb 13 19:00:08.570198 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:08.580133 systemd[1]: sshd@22-10.0.0.78:22-10.0.0.1:34626.service: Deactivated successfully. Feb 13 19:00:08.582138 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:00:08.582317 systemd[1]: session-23.scope: Consumed 1.728s CPU time. Feb 13 19:00:08.584067 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:00:08.595884 systemd[1]: Started sshd@23-10.0.0.78:22-10.0.0.1:34636.service - OpenSSH per-connection server daemon (10.0.0.1:34636). Feb 13 19:00:08.596716 systemd-logind[1432]: Removed session 23. Feb 13 19:00:08.641097 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 34636 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:08.642532 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:08.646925 systemd-logind[1432]: New session 24 of user core. Feb 13 19:00:08.657574 systemd[1]: Started session-24.scope - Session 24 of User core. 
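
The recurring dns.go:153 "Nameserver limits exceeded" error means the node's /etc/resolv.conf lists more nameservers than the resolver supports, so kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) when building pod resolv.conf files. A small sketch of that truncation, assuming glibc's conventional three-server limit:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc's resolver honours at most three entries

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // Mirrors the kubelet log: extra servers are dropped, first three applied.
            fmt.Printf("nameserver limits exceeded, applying first %d: %s\n",
                maxNameservers, strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("nameservers:", strings.Join(servers, " "))
        }
    }
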
Feb 13 19:00:09.443987 sshd[4334]: Connection closed by 10.0.0.1 port 34636 Feb 13 19:00:09.444490 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:09.453648 systemd[1]: sshd@23-10.0.0.78:22-10.0.0.1:34636.service: Deactivated successfully. Feb 13 19:00:09.455984 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:00:09.459710 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:00:09.465764 systemd[1]: Started sshd@24-10.0.0.78:22-10.0.0.1:34648.service - OpenSSH per-connection server daemon (10.0.0.1:34648). Feb 13 19:00:09.471529 systemd-logind[1432]: Removed session 24. Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473622 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="cilium-agent" Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473658 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="apply-sysctl-overwrites" Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473665 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="mount-bpf-fs" Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473672 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="clean-cilium-state" Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473679 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="mount-cgroup" Feb 13 19:00:09.476439 kubelet[2529]: E0213 19:00:09.473685 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="acf79cad-5434-4cad-9732-122dd8973263" containerName="cilium-operator" Feb 13 19:00:09.476439 kubelet[2529]: I0213 19:00:09.473712 2529 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf79cad-5434-4cad-9732-122dd8973263" containerName="cilium-operator" Feb 13 19:00:09.476439 kubelet[2529]: I0213 19:00:09.473719 2529 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2879869-f4c9-4451-bbd2-3e5ac5a899eb" containerName="cilium-agent" Feb 13 19:00:09.486289 systemd[1]: Created slice kubepods-burstable-pod51930edf_7f1d_4b1a_9204_0ed2d4b56001.slice - libcontainer container kubepods-burstable-pod51930edf_7f1d_4b1a_9204_0ed2d4b56001.slice. Feb 13 19:00:09.537805 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 34648 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:09.541815 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:09.549049 systemd-logind[1432]: New session 25 of user core. Feb 13 19:00:09.557764 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:00:09.608580 sshd[4347]: Connection closed by 10.0.0.1 port 34648 Feb 13 19:00:09.608979 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:09.619069 systemd[1]: sshd@24-10.0.0.78:22-10.0.0.1:34648.service: Deactivated successfully. Feb 13 19:00:09.620896 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:00:09.622404 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. 
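
Note how the replacement pod's UID 51930edf-7f1d-4b1a-9204-0ed2d4b56001 becomes the slice kubepods-burstable-pod51930edf_7f1d_4b1a_9204_0ed2d4b56001.slice: the QoS class and pod UID become nested cgroup segments joined by "-", so the dashes inside the UID itself are rewritten to "_" (since "-" is already the path separator in systemd unit names). A minimal sketch of that mapping, inferred directly from the slice names in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod reproduces the naming visible above: QoS class and pod UID
    // become nested slice segments, with dashes inside the UID escaped to "_".
    func sliceForPod(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(sliceForPod("burstable", "51930edf-7f1d-4b1a-9204-0ed2d4b56001"))
        // kubepods-burstable-pod51930edf_7f1d_4b1a_9204_0ed2d4b56001.slice
    }
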
Feb 13 19:00:09.622955 kubelet[2529]: I0213 19:00:09.622925 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhchr\" (UniqueName: \"kubernetes.io/projected/51930edf-7f1d-4b1a-9204-0ed2d4b56001-kube-api-access-jhchr\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623035 kubelet[2529]: I0213 19:00:09.622967 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-hostproc\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623035 kubelet[2529]: I0213 19:00:09.622990 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51930edf-7f1d-4b1a-9204-0ed2d4b56001-hubble-tls\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623035 kubelet[2529]: I0213 19:00:09.623008 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-cni-path\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623035 kubelet[2529]: I0213 19:00:09.623024 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51930edf-7f1d-4b1a-9204-0ed2d4b56001-cilium-config-path\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623040 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-host-proc-sys-kernel\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623056 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-lib-modules\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623072 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51930edf-7f1d-4b1a-9204-0ed2d4b56001-clustermesh-secrets\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623088 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-cilium-run\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623104 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-bpf-maps\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623127 kubelet[2529]: I0213 19:00:09.623120 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51930edf-7f1d-4b1a-9204-0ed2d4b56001-cilium-ipsec-secrets\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623244 kubelet[2529]: I0213 19:00:09.623137 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-host-proc-sys-net\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623244 kubelet[2529]: I0213 19:00:09.623156 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-cilium-cgroup\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623244 kubelet[2529]: I0213 19:00:09.623196 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-etc-cni-netd\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.623986 kubelet[2529]: I0213 19:00:09.623953 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51930edf-7f1d-4b1a-9204-0ed2d4b56001-xtables-lock\") pod \"cilium-2nsdc\" (UID: \"51930edf-7f1d-4b1a-9204-0ed2d4b56001\") " pod="kube-system/cilium-2nsdc" Feb 13 19:00:09.628658 systemd[1]: Started sshd@25-10.0.0.78:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). Feb 13 19:00:09.629958 systemd-logind[1432]: Removed session 25. Feb 13 19:00:09.673947 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:09.674892 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:09.680467 systemd-logind[1432]: New session 26 of user core. Feb 13 19:00:09.694555 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:00:09.791022 kubelet[2529]: E0213 19:00:09.790980 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:09.791987 containerd[1445]: time="2025-02-13T19:00:09.791639065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nsdc,Uid:51930edf-7f1d-4b1a-9204-0ed2d4b56001,Namespace:kube-system,Attempt:0,}" Feb 13 19:00:09.816422 containerd[1445]: time="2025-02-13T19:00:09.816058488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:00:09.816422 containerd[1445]: time="2025-02-13T19:00:09.816172123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:00:09.816422 containerd[1445]: time="2025-02-13T19:00:09.816191802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:09.816422 containerd[1445]: time="2025-02-13T19:00:09.816300637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:00:09.837603 systemd[1]: Started cri-containerd-4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66.scope - libcontainer container 4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66. Feb 13 19:00:09.859326 containerd[1445]: time="2025-02-13T19:00:09.859283765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nsdc,Uid:51930edf-7f1d-4b1a-9204-0ed2d4b56001,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\"" Feb 13 19:00:09.860214 kubelet[2529]: E0213 19:00:09.860192 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:09.862715 containerd[1445]: time="2025-02-13T19:00:09.862668082Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:00:09.962764 containerd[1445]: time="2025-02-13T19:00:09.962599505Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b\"" Feb 13 19:00:09.963391 containerd[1445]: time="2025-02-13T19:00:09.963285592Z" level=info msg="StartContainer for \"eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b\"" Feb 13 19:00:09.988610 systemd[1]: Started cri-containerd-eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b.scope - libcontainer container eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b. Feb 13 19:00:10.013114 containerd[1445]: time="2025-02-13T19:00:10.013063793Z" level=info msg="StartContainer for \"eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b\" returns successfully" Feb 13 19:00:10.027100 systemd[1]: cri-containerd-eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b.scope: Deactivated successfully. 
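
The RunPodSandbox, CreateContainer, StartContainer triple above is the CRI call sequence kubelet drives over containerd's gRPC socket. A hedged sketch of the same three calls using the k8s.io/cri-api client; the sandbox metadata is copied from the log line, but the image reference is a placeholder, not what kubelet actually sent:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Mirrors the log line:
        // &PodSandboxMetadata{Name:cilium-2nsdc,Uid:51930edf-...,Namespace:kube-system,Attempt:0,}
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "cilium-2nsdc",
                Uid:       "51930edf-7f1d-4b1a-9204-0ed2d4b56001",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // "CreateContainer within sandbox ... for &ContainerMetadata{Name:mount-cgroup,...}"
        // The image reference below is a placeholder, not taken from the log.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:placeholder"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }
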
Feb 13 19:00:10.079239 containerd[1445]: time="2025-02-13T19:00:10.079161781Z" level=info msg="shim disconnected" id=eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b namespace=k8s.io Feb 13 19:00:10.079239 containerd[1445]: time="2025-02-13T19:00:10.079222578Z" level=warning msg="cleaning up after shim disconnected" id=eb67da6411855b47eba4a5f7f7b2b7794d532bb8b13a7b9527ea2706b0db5b6b namespace=k8s.io Feb 13 19:00:10.079239 containerd[1445]: time="2025-02-13T19:00:10.079231538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:10.504782 kubelet[2529]: E0213 19:00:10.504717 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:10.507162 containerd[1445]: time="2025-02-13T19:00:10.507123094Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:00:10.529639 containerd[1445]: time="2025-02-13T19:00:10.529486689Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f\"" Feb 13 19:00:10.531635 containerd[1445]: time="2025-02-13T19:00:10.531595514Z" level=info msg="StartContainer for \"02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f\"" Feb 13 19:00:10.574588 systemd[1]: Started cri-containerd-02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f.scope - libcontainer container 02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f. Feb 13 19:00:10.603491 containerd[1445]: time="2025-02-13T19:00:10.603438083Z" level=info msg="StartContainer for \"02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f\" returns successfully" Feb 13 19:00:10.612980 systemd[1]: cri-containerd-02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f.scope: Deactivated successfully. Feb 13 19:00:10.637060 containerd[1445]: time="2025-02-13T19:00:10.636998133Z" level=info msg="shim disconnected" id=02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f namespace=k8s.io Feb 13 19:00:10.637060 containerd[1445]: time="2025-02-13T19:00:10.637054891Z" level=warning msg="cleaning up after shim disconnected" id=02c2e4432ba06a92dfd70d2b3b647b686288c2d5d51f50b445dc597c55b3543f namespace=k8s.io Feb 13 19:00:10.637060 containerd[1445]: time="2025-02-13T19:00:10.637069450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:11.508232 kubelet[2529]: E0213 19:00:11.507704 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:11.512038 containerd[1445]: time="2025-02-13T19:00:11.511987221Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:00:11.526111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242916984.mount: Deactivated successfully. 
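
The mount unit names above (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount3242916984.mount) use systemd's path escaping: the leading "/" is dropped, interior "/" becomes "-", and a literal "-" (among other reserved bytes) becomes its hex escape \x2d. A simplified sketch of that escape, roughly what `systemd-escape --path` does; the real rules also handle leading dots and empty paths:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates systemd's path escaping as seen in the mount unit
    // names above: strip the leading "/", turn remaining "/" into "-", and
    // hex-escape bytes outside [a-zA-Z0-9:_.]. Simplified relative to systemd.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount3242916984") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount3242916984.mount
    }
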
Feb 13 19:00:11.527635 containerd[1445]: time="2025-02-13T19:00:11.527580888Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3\"" Feb 13 19:00:11.528577 containerd[1445]: time="2025-02-13T19:00:11.528424653Z" level=info msg="StartContainer for \"0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3\"" Feb 13 19:00:11.561611 systemd[1]: Started cri-containerd-0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3.scope - libcontainer container 0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3. Feb 13 19:00:11.591523 containerd[1445]: time="2025-02-13T19:00:11.591468615Z" level=info msg="StartContainer for \"0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3\" returns successfully" Feb 13 19:00:11.591984 systemd[1]: cri-containerd-0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3.scope: Deactivated successfully. Feb 13 19:00:11.618501 containerd[1445]: time="2025-02-13T19:00:11.618314852Z" level=info msg="shim disconnected" id=0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3 namespace=k8s.io Feb 13 19:00:11.618501 containerd[1445]: time="2025-02-13T19:00:11.618379649Z" level=warning msg="cleaning up after shim disconnected" id=0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3 namespace=k8s.io Feb 13 19:00:11.618501 containerd[1445]: time="2025-02-13T19:00:11.618408128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:11.630086 containerd[1445]: time="2025-02-13T19:00:11.629870448Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:00:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:00:11.732901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0afbb52bb0bc74267229c4d603a2fc20a7c0c4d2dfef7c3f0f9dc3ad09bfc4d3-rootfs.mount: Deactivated successfully. Feb 13 19:00:12.304023 kubelet[2529]: E0213 19:00:12.303982 2529 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:00:12.510806 kubelet[2529]: E0213 19:00:12.510747 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:12.514531 containerd[1445]: time="2025-02-13T19:00:12.514319434Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:00:12.524859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460489695.mount: Deactivated successfully. 
Feb 13 19:00:12.526148 containerd[1445]: time="2025-02-13T19:00:12.526090617Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd\"" Feb 13 19:00:12.526914 containerd[1445]: time="2025-02-13T19:00:12.526880106Z" level=info msg="StartContainer for \"4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd\"" Feb 13 19:00:12.555568 systemd[1]: Started cri-containerd-4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd.scope - libcontainer container 4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd. Feb 13 19:00:12.582523 systemd[1]: cri-containerd-4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd.scope: Deactivated successfully. Feb 13 19:00:12.585101 containerd[1445]: time="2025-02-13T19:00:12.584906574Z" level=info msg="StartContainer for \"4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd\" returns successfully" Feb 13 19:00:12.605751 containerd[1445]: time="2025-02-13T19:00:12.605680287Z" level=info msg="shim disconnected" id=4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd namespace=k8s.io Feb 13 19:00:12.605751 containerd[1445]: time="2025-02-13T19:00:12.605737445Z" level=warning msg="cleaning up after shim disconnected" id=4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd namespace=k8s.io Feb 13 19:00:12.605751 containerd[1445]: time="2025-02-13T19:00:12.605746925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:00:12.732979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b5f3178c5f70428ec48439aad2c4f2c26f7e07c1760c4956ed7ef78d34952dd-rootfs.mount: Deactivated successfully. Feb 13 19:00:13.515992 kubelet[2529]: E0213 19:00:13.515475 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:13.527629 containerd[1445]: time="2025-02-13T19:00:13.527418250Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:00:13.546834 containerd[1445]: time="2025-02-13T19:00:13.546789995Z" level=info msg="CreateContainer within sandbox \"4e7afbe6e025d767be951e12ba0f80e8f97c3894ba7e92e9693b7c57f5983f66\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ff1b5cef75f0742e2f7a6ac4d11f1812787a6c56f99c0a4d6e033a754480311\"" Feb 13 19:00:13.547520 containerd[1445]: time="2025-02-13T19:00:13.547490650Z" level=info msg="StartContainer for \"2ff1b5cef75f0742e2f7a6ac4d11f1812787a6c56f99c0a4d6e033a754480311\"" Feb 13 19:00:13.580628 systemd[1]: Started cri-containerd-2ff1b5cef75f0742e2f7a6ac4d11f1812787a6c56f99c0a4d6e033a754480311.scope - libcontainer container 2ff1b5cef75f0742e2f7a6ac4d11f1812787a6c56f99c0a4d6e033a754480311. 
Feb 13 19:00:13.607901 containerd[1445]: time="2025-02-13T19:00:13.607760247Z" level=info msg="StartContainer for \"2ff1b5cef75f0742e2f7a6ac4d11f1812787a6c56f99c0a4d6e033a754480311\" returns successfully" Feb 13 19:00:13.908393 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:00:14.170730 kubelet[2529]: I0213 19:00:14.170661 2529 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:00:14Z","lastTransitionTime":"2025-02-13T19:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:00:14.520803 kubelet[2529]: E0213 19:00:14.520758 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:14.544870 kubelet[2529]: I0213 19:00:14.544803 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2nsdc" podStartSLOduration=5.544781175 podStartE2EDuration="5.544781175s" podCreationTimestamp="2025-02-13 19:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:00:14.544462585 +0000 UTC m=+82.375573644" watchObservedRunningTime="2025-02-13 19:00:14.544781175 +0000 UTC m=+82.375892154" Feb 13 19:00:15.793406 kubelet[2529]: E0213 19:00:15.792594 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:16.929146 systemd-networkd[1378]: lxc_health: Link UP Feb 13 19:00:16.935342 systemd-networkd[1378]: lxc_health: Gained carrier Feb 13 19:00:17.793857 kubelet[2529]: E0213 19:00:17.792767 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:18.253461 systemd-networkd[1378]: lxc_health: Gained IPv6LL Feb 13 19:00:18.528578 kubelet[2529]: E0213 19:00:18.528241 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:19.531971 kubelet[2529]: E0213 19:00:19.530057 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:22.605467 sshd[4355]: Connection closed by 10.0.0.1 port 34662 Feb 13 19:00:22.606053 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:22.609330 systemd[1]: sshd@25-10.0.0.78:22-10.0.0.1:34662.service: Deactivated successfully. Feb 13 19:00:22.612074 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:00:22.615425 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:00:22.616794 systemd-logind[1432]: Removed session 26.
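
The pod_startup_latency_tracker entry closes out the restart: with both image-pull timestamps at the zero value (nothing was pulled), podStartSLOduration is essentially observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic with the two timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-02-13T19:00:09Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-02-13T19:00:14.544462585Z")
        // Prints 5.544462585s, close to the reported podStartSLOduration=5.544781175.
        fmt.Println(running.Sub(created))
    }
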