Feb 13 15:35:02.902305 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:35:02.902324 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:35:02.902334 kernel: KASLR enabled
Feb 13 15:35:02.902340 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:35:02.902346 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 15:35:02.902352 kernel: random: crng init done
Feb 13 15:35:02.902359 kernel: secureboot: Secure boot disabled
Feb 13 15:35:02.902365 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:35:02.902371 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:35:02.902378 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:35:02.902384 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902390 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902396 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902402 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902409 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902417 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902423 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902429 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902435 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:02.902442 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:35:02.902448 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:35:02.902454 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:02.902460 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:35:02.902466 kernel: Zone ranges:
Feb 13 15:35:02.902473 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:02.902480 kernel: DMA32 empty
Feb 13 15:35:02.902486 kernel: Normal empty
Feb 13 15:35:02.902492 kernel: Movable zone start for each node
Feb 13 15:35:02.902498 kernel: Early memory node ranges
Feb 13 15:35:02.902505 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:35:02.902511 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:35:02.902517 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:35:02.902523 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:35:02.902529 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:35:02.902535 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:35:02.902541 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:35:02.902548 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:35:02.902555 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:35:02.902561 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:02.902568 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:35:02.902577 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:35:02.902583 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:35:02.902590 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:35:02.902598 kernel: psci: Trusted OS migration not required
Feb 13 15:35:02.902604 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:35:02.902611 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:35:02.902618 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:35:02.902624 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:35:02.902631 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:35:02.902638 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:35:02.902644 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:35:02.902651 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:35:02.902658 kernel: CPU features: detected: Spectre-v4
Feb 13 15:35:02.902665 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:35:02.902672 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:35:02.902679 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:35:02.902685 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:35:02.902692 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:35:02.902698 kernel: alternatives: applying boot alternatives
Feb 13 15:35:02.902706 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:35:02.902713 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:35:02.902720 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:35:02.902726 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:35:02.902733 kernel: Fallback order for Node 0: 0
Feb 13 15:35:02.902741 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:35:02.902748 kernel: Policy zone: DMA
Feb 13 15:35:02.902754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:35:02.902761 kernel: software IO TLB: area num 4.
Feb 13 15:35:02.902775 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:35:02.902783 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 15:35:02.902790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:35:02.902797 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:35:02.902804 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:35:02.902811 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:35:02.902818 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:35:02.902825 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:35:02.902833 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:35:02.902840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:35:02.902847 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:35:02.902853 kernel: GICv3: 256 SPIs implemented
Feb 13 15:35:02.902860 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:35:02.902867 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:35:02.902873 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:35:02.902880 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:35:02.902887 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:35:02.902893 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:35:02.902900 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:35:02.902908 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:35:02.902930 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:35:02.902938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:35:02.902944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:02.902951 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:35:02.902958 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:35:02.902965 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:35:02.902971 kernel: arm-pv: using stolen time PV
Feb 13 15:35:02.902978 kernel: Console: colour dummy device 80x25
Feb 13 15:35:02.902985 kernel: ACPI: Core revision 20230628
Feb 13 15:35:02.902992 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:35:02.903001 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:35:02.903008 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:35:02.903014 kernel: landlock: Up and running.
Feb 13 15:35:02.903021 kernel: SELinux: Initializing.
Feb 13 15:35:02.903028 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:02.903035 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:02.903042 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:02.903049 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:02.903056 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:35:02.903064 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:35:02.903071 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:35:02.903077 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:35:02.903084 kernel: Remapping and enabling EFI services.
Feb 13 15:35:02.903091 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:35:02.903098 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:35:02.903105 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:35:02.903112 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:35:02.903118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:02.903126 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:35:02.903133 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:35:02.903145 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:35:02.903154 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:35:02.903161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:02.903168 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:35:02.903175 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:35:02.903182 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:35:02.903189 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:35:02.903198 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:02.903205 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:35:02.903212 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:35:02.903219 kernel: SMP: Total of 4 processors activated.
Feb 13 15:35:02.903226 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:35:02.903234 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:35:02.903241 kernel: CPU features: detected: Common not Private translations
Feb 13 15:35:02.903248 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:35:02.903257 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:35:02.903264 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:35:02.903271 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:35:02.903278 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:35:02.903286 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:35:02.903293 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:35:02.903300 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:35:02.903308 kernel: alternatives: applying system-wide alternatives
Feb 13 15:35:02.903315 kernel: devtmpfs: initialized
Feb 13 15:35:02.903323 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:35:02.903331 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:35:02.903338 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:35:02.903345 kernel: SMBIOS 3.0.0 present.
Feb 13 15:35:02.903353 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:35:02.903360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:35:02.903367 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:35:02.903374 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:35:02.903382 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:35:02.903389 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:35:02.903397 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Feb 13 15:35:02.903404 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:35:02.903411 kernel: cpuidle: using governor menu
Feb 13 15:35:02.903419 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:35:02.903426 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:35:02.903434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:35:02.903441 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:35:02.903448 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:35:02.903455 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:35:02.903464 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:35:02.903471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:35:02.903478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:35:02.903486 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:35:02.903493 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:35:02.903500 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:35:02.903507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:35:02.903515 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:35:02.903522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:35:02.903530 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:35:02.903538 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:35:02.903545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:35:02.903552 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:35:02.903559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:35:02.903566 kernel: ACPI: Interpreter enabled
Feb 13 15:35:02.903573 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:35:02.903580 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:35:02.903587 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:35:02.903596 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:35:02.903603 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:35:02.903737 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:35:02.903820 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:35:02.903885 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:35:02.903973 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:35:02.904043 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:35:02.904056 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:35:02.904064 kernel: PCI host bridge to bus 0000:00
Feb 13 15:35:02.904135 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:02.904194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:35:02.904252 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:02.904310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:35:02.904391 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:35:02.904469 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:35:02.904538 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:35:02.904606 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:35:02.904673 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:02.904739 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:02.904815 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:35:02.904884 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:35:02.904958 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:02.905028 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:35:02.905090 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:02.905099 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:35:02.905113 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:35:02.905120 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:35:02.905128 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:35:02.905138 kernel: iommu: Default domain type: Translated
Feb 13 15:35:02.905146 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:35:02.905154 kernel: efivars: Registered efivars operations
Feb 13 15:35:02.905161 kernel: vgaarb: loaded
Feb 13 15:35:02.905168 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:35:02.905176 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:35:02.905183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:35:02.905191 kernel: pnp: PnP ACPI init
Feb 13 15:35:02.905273 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:35:02.905286 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:35:02.905294 kernel: NET: Registered PF_INET protocol family
Feb 13 15:35:02.905307 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:35:02.905315 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:35:02.905322 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:35:02.905329 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:35:02.905337 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:35:02.905344 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:35:02.905352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:02.905361 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:02.905368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:35:02.905375 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:35:02.905383 kernel: kvm [1]: HYP mode not available
Feb 13 15:35:02.905390 kernel: Initialise system trusted keyrings
Feb 13 15:35:02.905397 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:35:02.905405 kernel: Key type asymmetric registered
Feb 13 15:35:02.905412 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:35:02.905419 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:35:02.905428 kernel: io scheduler mq-deadline registered
Feb 13 15:35:02.905435 kernel: io scheduler kyber registered
Feb 13 15:35:02.905443 kernel: io scheduler bfq registered
Feb 13 15:35:02.905450 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:35:02.905458 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:35:02.905466 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:35:02.905537 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:35:02.905547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:35:02.905554 kernel: thunder_xcv, ver 1.0
Feb 13 15:35:02.905563 kernel: thunder_bgx, ver 1.0
Feb 13 15:35:02.905570 kernel: nicpf, ver 1.0
Feb 13 15:35:02.905578 kernel: nicvf, ver 1.0
Feb 13 15:35:02.905652 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:35:02.905724 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:35:02 UTC (1739460902)
Feb 13 15:35:02.905734 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:35:02.905742 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:35:02.905750 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:35:02.905760 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:35:02.905773 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:35:02.905782 kernel: Segment Routing with IPv6
Feb 13 15:35:02.905789 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:35:02.905797 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:35:02.905804 kernel: Key type dns_resolver registered
Feb 13 15:35:02.905812 kernel: registered taskstats version 1
Feb 13 15:35:02.905819 kernel: Loading compiled-in X.509 certificates
Feb 13 15:35:02.905827 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:35:02.905836 kernel: Key type .fscrypt registered
Feb 13 15:35:02.905844 kernel: Key type fscrypt-provisioning registered
Feb 13 15:35:02.905851 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:35:02.905858 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:35:02.905866 kernel: ima: No architecture policies found
Feb 13 15:35:02.905873 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:35:02.905881 kernel: clk: Disabling unused clocks
Feb 13 15:35:02.905888 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:35:02.905896 kernel: Run /init as init process
Feb 13 15:35:02.905904 kernel: with arguments:
Feb 13 15:35:02.905912 kernel: /init
Feb 13 15:35:02.905929 kernel: with environment:
Feb 13 15:35:02.905936 kernel: HOME=/
Feb 13 15:35:02.905944 kernel: TERM=linux
Feb 13 15:35:02.905951 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:35:02.905960 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:02.905969 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:02.905980 systemd[1]: Detected architecture arm64.
Feb 13 15:35:02.905988 systemd[1]: Running in initrd.
Feb 13 15:35:02.905995 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:35:02.906003 systemd[1]: Hostname set to .
Feb 13 15:35:02.906011 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:02.906019 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:35:02.906027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:02.906035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:02.906045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:35:02.906053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:02.906062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:35:02.906070 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:35:02.906079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:35:02.906087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:35:02.906097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:02.906105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:02.906113 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:35:02.906122 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:02.906130 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:02.906137 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:35:02.906146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:02.906154 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:02.906162 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:35:02.906171 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:35:02.906179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:02.906187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:02.906195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:02.906203 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:35:02.906210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:35:02.906218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:02.906226 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:35:02.906236 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:35:02.906244 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:02.906252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:02.906260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:02.906268 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:02.906276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:02.906284 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:35:02.906295 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:35:02.906304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:02.906312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:02.906320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:02.906346 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:35:02.906368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:35:02.906377 systemd-journald[238]: Journal started
Feb 13 15:35:02.908251 systemd-journald[238]: Runtime Journal (/run/log/journal/006657b7149942ff84f9145f1b58a499) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:35:02.889238 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 15:35:02.911320 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:02.911663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:02.915944 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:35:02.917155 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 15:35:02.917944 kernel: Bridge firewalling registered
Feb 13 15:35:02.923149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:35:02.924183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:02.925722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:02.928103 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:35:02.930146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:02.932946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:02.943748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:02.949643 dracut-cmdline[268]: dracut-dracut-053
Feb 13 15:35:02.953413 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:35:02.952067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:02.980860 systemd-resolved[281]: Positive Trust Anchors:
Feb 13 15:35:02.980878 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:35:02.980910 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:35:02.985735 systemd-resolved[281]: Defaulting to hostname 'linux'.
Feb 13 15:35:02.986816 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:02.987962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:03.025954 kernel: SCSI subsystem initialized
Feb 13 15:35:03.030942 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:35:03.037947 kernel: iscsi: registered transport (tcp)
Feb 13 15:35:03.050204 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:35:03.050220 kernel: QLogic iSCSI HBA Driver
Feb 13 15:35:03.090602 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:03.100079 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:35:03.117249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:35:03.117292 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:35:03.118071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:35:03.162945 kernel: raid6: neonx8 gen() 15792 MB/s
Feb 13 15:35:03.179931 kernel: raid6: neonx4 gen() 15814 MB/s
Feb 13 15:35:03.196927 kernel: raid6: neonx2 gen() 13207 MB/s
Feb 13 15:35:03.213930 kernel: raid6: neonx1 gen() 10541 MB/s
Feb 13 15:35:03.230935 kernel: raid6: int64x8 gen() 6789 MB/s
Feb 13 15:35:03.247927 kernel: raid6: int64x4 gen() 7352 MB/s
Feb 13 15:35:03.264930 kernel: raid6: int64x2 gen() 6112 MB/s
Feb 13 15:35:03.281930 kernel: raid6: int64x1 gen() 5052 MB/s
Feb 13 15:35:03.281944 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s
Feb 13 15:35:03.298940 kernel: raid6: .... xor() 12524 MB/s, rmw enabled
Feb 13 15:35:03.298963 kernel: raid6: using neon recovery algorithm
Feb 13 15:35:03.304252 kernel: xor: measuring software checksum speed
Feb 13 15:35:03.304269 kernel: 8regs : 21601 MB/sec
Feb 13 15:35:03.304278 kernel: 32regs : 21243 MB/sec
Feb 13 15:35:03.305185 kernel: arm64_neon : 27794 MB/sec
Feb 13 15:35:03.305198 kernel: xor: using function: arm64_neon (27794 MB/sec)
Feb 13 15:35:03.353940 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:35:03.365024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:03.377064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:03.387839 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Feb 13 15:35:03.390892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:03.405114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:35:03.416724 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Feb 13 15:35:03.442538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:03.451167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:03.493708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:03.502180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:35:03.517954 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:03.519655 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:03.520889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:03.523674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:03.534074 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:35:03.540602 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:35:03.553429 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:35:03.553555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:35:03.553569 kernel: GPT:9289727 != 19775487
Feb 13 15:35:03.553580 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:35:03.553596 kernel: GPT:9289727 != 19775487
Feb 13 15:35:03.553606 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:35:03.553617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:03.545420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:03.556340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:03.556465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:03.559351 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:03.564891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:03.579080 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (501)
Feb 13 15:35:03.565078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:03.576509 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:03.583940 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Feb 13 15:35:03.590162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:03.601589 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:35:03.603613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:03.609436 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:35:03.614197 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:35:03.616085 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:35:03.626394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:35:03.642129 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:35:03.644151 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:03.648986 disk-uuid[548]: Primary Header is updated.
Feb 13 15:35:03.648986 disk-uuid[548]: Secondary Entries is updated.
Feb 13 15:35:03.648986 disk-uuid[548]: Secondary Header is updated.
Feb 13 15:35:03.651448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:03.673776 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:04.664161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:04.664211 disk-uuid[550]: The operation has completed successfully.
Feb 13 15:35:04.690843 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:35:04.690962 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:35:04.710079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:35:04.713784 sh[569]: Success
Feb 13 15:35:04.728944 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:35:04.759773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:35:04.771277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:35:04.772655 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:35:04.785571 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:35:04.785608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:04.785619 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:35:04.786989 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:35:04.787016 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:35:04.790828 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:35:04.792163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:35:04.808143 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:35:04.810334 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:35:04.817030 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:04.817067 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:04.818095 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:04.819939 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:04.827249 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:35:04.828596 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:04.869111 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:35:04.882105 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:35:04.898952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:04.913108 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:35:04.944656 systemd-networkd[752]: lo: Link UP
Feb 13 15:35:04.944669 systemd-networkd[752]: lo: Gained carrier
Feb 13 15:35:04.945454 systemd-networkd[752]: Enumeration completed
Feb 13 15:35:04.945743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:35:04.945840 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:04.945843 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:35:04.946865 systemd[1]: Reached target network.target - Network.
Feb 13 15:35:04.948323 systemd-networkd[752]: eth0: Link UP
Feb 13 15:35:04.948326 systemd-networkd[752]: eth0: Gained carrier
Feb 13 15:35:04.948333 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:04.969985 systemd-networkd[752]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:35:04.991061 ignition[720]: Ignition 2.20.0
Feb 13 15:35:04.991071 ignition[720]: Stage: fetch-offline
Feb 13 15:35:04.991106 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:04.991114 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:04.991263 ignition[720]: parsed url from cmdline: ""
Feb 13 15:35:04.991266 ignition[720]: no config URL provided
Feb 13 15:35:04.991271 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:35:04.991277 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:35:04.991303 ignition[720]: op(1): [started] loading QEMU firmware config module
Feb 13 15:35:04.991307 ignition[720]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:35:04.997639 ignition[720]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:35:04.997664 ignition[720]: QEMU firmware config was not found. Ignoring...
Feb 13 15:35:05.035487 ignition[720]: parsing config with SHA512: c034ff3911caf274d5083acd92010d3f0ac2891a30cb868efe14b53ee69f5b15cd7c594bddd41299e13ada4a46a5cccc1dcbd94df7f1c4260b0922fd7654502e
Feb 13 15:35:05.040323 unknown[720]: fetched base config from "system"
Feb 13 15:35:05.040332 unknown[720]: fetched user config from "qemu"
Feb 13 15:35:05.042393 ignition[720]: fetch-offline: fetch-offline passed
Feb 13 15:35:05.042499 ignition[720]: Ignition finished successfully
Feb 13 15:35:05.044111 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:05.046281 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:35:05.059093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:35:05.069543 ignition[765]: Ignition 2.20.0
Feb 13 15:35:05.069554 ignition[765]: Stage: kargs
Feb 13 15:35:05.069721 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:05.069731 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:05.070637 ignition[765]: kargs: kargs passed
Feb 13 15:35:05.072686 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:05.070683 ignition[765]: Ignition finished successfully
Feb 13 15:35:05.081085 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:35:05.091158 ignition[775]: Ignition 2.20.0
Feb 13 15:35:05.091168 ignition[775]: Stage: disks
Feb 13 15:35:05.091336 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:05.091346 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:05.092258 ignition[775]: disks: disks passed
Feb 13 15:35:05.093531 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:35:05.092302 ignition[775]: Ignition finished successfully
Feb 13 15:35:05.094562 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:05.095531 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:35:05.097014 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:05.098158 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:35:05.099576 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:35:05.113087 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:35:05.123610 systemd-fsck[785]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:35:05.127191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:35:05.129976 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:35:05.174944 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:35:05.175505 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:35:05.176531 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:05.186999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:05.188487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:35:05.189495 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:35:05.189564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:35:05.189590 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:05.195979 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (793)
Feb 13 15:35:05.195453 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:35:05.197354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:35:05.200267 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:05.200290 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:05.200300 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:05.202932 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:05.204021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:05.241638 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:35:05.245890 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:35:05.249618 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:35:05.253008 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:35:05.334868 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:05.344018 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:35:05.345358 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:05.350933 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:05.365213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:05.370284 ignition[907]: INFO : Ignition 2.20.0
Feb 13 15:35:05.370284 ignition[907]: INFO : Stage: mount
Feb 13 15:35:05.372157 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:05.372157 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:05.372157 ignition[907]: INFO : mount: mount passed
Feb 13 15:35:05.372157 ignition[907]: INFO : Ignition finished successfully
Feb 13 15:35:05.372888 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:35:05.384055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:35:05.784954 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:35:05.797100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:05.803298 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919)
Feb 13 15:35:05.803327 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:05.803337 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:05.804930 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:05.806951 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:05.807320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:05.822661 ignition[936]: INFO : Ignition 2.20.0
Feb 13 15:35:05.822661 ignition[936]: INFO : Stage: files
Feb 13 15:35:05.824239 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:05.824239 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:05.824239 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:35:05.827511 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:35:05.827511 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:35:05.827511 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:35:05.827511 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:35:05.827511 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:35:05.826966 unknown[936]: wrote ssh authorized keys file for user: core
Feb 13 15:35:05.834478 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:05.834478 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:35:05.944391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:35:06.130803 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:06.130803 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:35:06.133844 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:35:06.437195 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:35:06.442720 systemd-networkd[752]: eth0: Gained IPv6LL
Feb 13 15:35:06.523548 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:06.525354 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:06.543389 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:06.543389 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:06.543389 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:35:06.790392 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:35:07.002562 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:07.002562 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:35:07.006248 ignition[936]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:07.030929 ignition[936]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:07.034710 ignition[936]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:07.037225 ignition[936]: INFO : files: files passed
Feb 13 15:35:07.037225 ignition[936]: INFO : Ignition finished successfully
Feb 13 15:35:07.038974 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:35:07.050217 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:35:07.052779 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:35:07.057470 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:35:07.057575 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:35:07.062057 initrd-setup-root-after-ignition[965]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:35:07.064704 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:07.064704 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:07.067540 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:07.067973 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:07.071311 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:35:07.078136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:35:07.099912 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:35:07.100934 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:35:07.102063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:07.105176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:35:07.106590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:35:07.107435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:35:07.123716 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:07.136111 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:35:07.144525 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:07.145479 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:07.147021 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:35:07.148493 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:35:07.148618 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:07.150614 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:35:07.152171 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:35:07.153596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:35:07.154840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:07.156334 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:07.157811 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:35:07.159200 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:35:07.160677 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:35:07.162119 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:35:07.163377 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:35:07.164496 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:35:07.164624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:35:07.166449 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:35:07.167922 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:35:07.169364 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:35:07.171209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:35:07.172262 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:35:07.172381 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:35:07.174378 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:35:07.174490 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:35:07.175903 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:35:07.177080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:35:07.182912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:35:07.183867 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:35:07.185436 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:35:07.186611 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:35:07.186701 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:35:07.187797 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:35:07.187874 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:35:07.188980 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:35:07.189083 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:35:07.190368 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:35:07.190466 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:35:07.209776 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:35:07.210478 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:35:07.210604 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:35:07.214491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:35:07.215873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:35:07.216734 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:35:07.217695 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:35:07.217805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:35:07.222566 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:35:07.223946 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 15:35:07.227331 ignition[991]: INFO : Ignition 2.20.0 Feb 13 15:35:07.227331 ignition[991]: INFO : Stage: umount Feb 13 15:35:07.227331 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:35:07.227331 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:35:07.231076 ignition[991]: INFO : umount: umount passed Feb 13 15:35:07.231076 ignition[991]: INFO : Ignition finished successfully Feb 13 15:35:07.228327 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:35:07.230183 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:35:07.231950 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:35:07.233414 systemd[1]: Stopped target network.target - Network. Feb 13 15:35:07.234415 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:35:07.234470 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:35:07.235815 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:35:07.235858 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:35:07.237105 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:35:07.237146 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:35:07.238340 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:35:07.238379 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:35:07.239851 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:35:07.241102 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:35:07.246006 systemd-networkd[752]: eth0: DHCPv6 lease lost Feb 13 15:35:07.247840 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:35:07.249955 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:35:07.251512 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:35:07.251705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:35:07.254526 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:35:07.254588 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:35:07.268383 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:35:07.269080 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:35:07.269143 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:35:07.270645 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:35:07.270683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:35:07.272016 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:35:07.272055 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:35:07.273834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:35:07.273875 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:35:07.275428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:35:07.284531 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:35:07.284647 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:35:07.292566 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 15:35:07.292708 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:35:07.294686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:35:07.294744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:35:07.295850 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:35:07.295879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:35:07.297243 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:35:07.297290 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:35:07.299260 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:35:07.299301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:35:07.301272 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:35:07.301316 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:35:07.318088 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:35:07.318861 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:35:07.318962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:35:07.320565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:35:07.320605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:35:07.321735 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:35:07.321838 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:35:07.323316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:35:07.323410 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:35:07.325843 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:35:07.325985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:35:07.327535 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:35:07.329426 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:35:07.339329 systemd[1]: Switching root. Feb 13 15:35:07.361685 systemd-journald[238]: Journal stopped Feb 13 15:35:08.141972 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 15:35:08.142027 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:35:08.142039 kernel: SELinux: policy capability open_perms=1 Feb 13 15:35:08.142048 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:35:08.142064 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:35:08.142074 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:35:08.142083 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:35:08.142092 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:35:08.142101 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:35:08.142110 kernel: audit: type=1403 audit(1739460907.551:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:35:08.142121 systemd[1]: Successfully loaded SELinux policy in 35.875ms. Feb 13 15:35:08.142137 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.680ms. 
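Note the journal handoff here: the initrd's journald receives SIGTERM at switch root and the real root's journald takes over, which is how everything above survives into the persistent journal. Assuming a normally booted Flatcar machine, these early messages can be pulled back out with standard journalctl switches:

    journalctl --list-boots                      # enumerate boots kept in the journal
    journalctl -b -o short-precise -t ignition   # Ignition's initrd output, microsecond timestamps
    journalctl -b -u initrd-switch-root.service  # the switch-root transition itself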
Feb 13 15:35:08.142148 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:35:08.142161 systemd[1]: Detected virtualization kvm. Feb 13 15:35:08.142171 systemd[1]: Detected architecture arm64. Feb 13 15:35:08.142181 systemd[1]: Detected first boot. Feb 13 15:35:08.142191 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:35:08.142201 zram_generator::config[1037]: No configuration found. Feb 13 15:35:08.142212 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:35:08.142222 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:35:08.142238 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:35:08.142249 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:35:08.142260 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:35:08.142270 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:35:08.142280 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:35:08.142296 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:35:08.142307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:35:08.142317 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:35:08.142328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:35:08.142338 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:35:08.142349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:35:08.142360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:35:08.142370 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:35:08.142380 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:35:08.142390 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:35:08.142401 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:35:08.142411 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:35:08.142421 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:35:08.142431 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:35:08.142443 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:35:08.142453 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:35:08.142464 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:35:08.142474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:35:08.142484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:35:08.142494 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:35:08.142504 systemd[1]: Reached target swap.target - Swaps. 
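"Populated /etc with preset unit settings" is systemd applying preset policy on first boot; the presets Ignition wrote during the files stage reduce to a fragment of this shape (file name illustrative; format per systemd.preset(5)):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service

Running systemctl preset-all by hand would apply the same policy.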
Feb 13 15:35:08.142514 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:35:08.142526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:35:08.142536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:35:08.142546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:35:08.142556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:35:08.142566 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:35:08.142576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:35:08.142586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:35:08.142596 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:35:08.142606 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:35:08.142617 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:35:08.142627 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:35:08.142643 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:35:08.142654 systemd[1]: Reached target machines.target - Containers. Feb 13 15:35:08.142664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:35:08.142674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:08.142686 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:35:08.142697 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:35:08.142708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:08.142719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:35:08.142729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:35:08.142739 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:35:08.142754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:35:08.142766 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:35:08.142776 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:35:08.142786 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:35:08.142795 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:35:08.142807 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:35:08.142817 kernel: fuse: init (API version 7.39) Feb 13 15:35:08.142826 kernel: loop: module loaded Feb 13 15:35:08.142835 kernel: ACPI: bus type drm_connector registered Feb 13 15:35:08.142844 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:35:08.142855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:35:08.142865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:35:08.142875 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
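The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@fuse, and modprobe@loop jobs above are all instances of systemd's modprobe@.service template, which simply runs modprobe on the instance name. Its core is roughly as follows (paraphrased from the upstream unit; verify with systemctl cat modprobe@.service):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The "-" prefix makes a failed modprobe non-fatal, which is why a missing module yields a skipped unit rather than a boot error.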
Feb 13 15:35:08.142889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:35:08.142901 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:35:08.142912 systemd[1]: Stopped verity-setup.service. Feb 13 15:35:08.142928 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:35:08.142938 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:35:08.142948 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:35:08.142979 systemd-journald[1111]: Collecting audit messages is disabled. Feb 13 15:35:08.143005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:35:08.143015 systemd-journald[1111]: Journal started Feb 13 15:35:08.143039 systemd-journald[1111]: Runtime Journal (/run/log/journal/006657b7149942ff84f9145f1b58a499) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:35:07.930197 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:35:07.952217 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:35:07.952593 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:35:08.144631 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:35:08.146190 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:35:08.146772 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:35:08.149031 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:35:08.150311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:35:08.151697 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:35:08.151871 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:35:08.153330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:08.153497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:08.154737 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:35:08.154895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:35:08.156345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:08.156530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:35:08.158010 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:35:08.158161 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:35:08.159309 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:08.159544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:08.160902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:35:08.162236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:35:08.163598 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:35:08.178126 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:35:08.191088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:35:08.193208 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:35:08.194280 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 13 15:35:08.194343 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:35:08.196228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:35:08.198608 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:35:08.200793 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:35:08.201694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:08.203539 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:35:08.205564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:35:08.206774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:08.211246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:35:08.212220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:08.213166 systemd-journald[1111]: Time spent on flushing to /var/log/journal/006657b7149942ff84f9145f1b58a499 is 12.114ms for 859 entries. Feb 13 15:35:08.213166 systemd-journald[1111]: System Journal (/var/log/journal/006657b7149942ff84f9145f1b58a499) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:35:08.245577 systemd-journald[1111]: Received client request to flush runtime journal. Feb 13 15:35:08.214288 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:35:08.218186 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:35:08.223260 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:35:08.226477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:35:08.248155 kernel: loop0: detected capacity change from 0 to 194096 Feb 13 15:35:08.227860 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:35:08.229063 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:35:08.230412 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:35:08.238066 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:35:08.241932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:35:08.247196 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:35:08.253348 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:35:08.254892 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:35:08.256303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:35:08.270327 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:35:08.274019 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:35:08.275709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:35:08.286549 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
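The journal flush above migrates the volatile runtime journal under /run/log/journal into the persistent store under /var/log/journal, subject to the size caps printed in the log (8.0M used, 195.6M max for the system journal). Assuming stock journald, the relevant knobs live in journald.conf:

    # /etc/systemd/journald.conf (excerpt; values illustrative)
    [Journal]
    Storage=persistent
    SystemMaxUse=200M

and the current on-disk footprint can be checked with:

    journalctl --disk-usage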
Feb 13 15:35:08.293841 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:35:08.294603 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:35:08.307194 kernel: loop1: detected capacity change from 0 to 116784 Feb 13 15:35:08.308138 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 15:35:08.308156 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 15:35:08.312466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:35:08.334952 kernel: loop2: detected capacity change from 0 to 113552 Feb 13 15:35:08.365944 kernel: loop3: detected capacity change from 0 to 194096 Feb 13 15:35:08.373018 kernel: loop4: detected capacity change from 0 to 116784 Feb 13 15:35:08.378944 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 15:35:08.381719 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:35:08.382138 (sd-merge)[1175]: Merged extensions into '/usr'. Feb 13 15:35:08.389788 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:35:08.390503 systemd[1]: Reloading... Feb 13 15:35:08.436955 zram_generator::config[1201]: No configuration found. Feb 13 15:35:08.508460 ldconfig[1143]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:35:08.546484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:08.581960 systemd[1]: Reloading finished in 190 ms. Feb 13 15:35:08.615315 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:35:08.616561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:35:08.632192 systemd[1]: Starting ensure-sysext.service... Feb 13 15:35:08.633882 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:35:08.648526 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:35:08.648545 systemd[1]: Reloading... Feb 13 15:35:08.651557 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:35:08.652051 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:35:08.652760 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:35:08.653081 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 15:35:08.653208 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 15:35:08.655767 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:35:08.655863 systemd-tmpfiles[1239]: Skipping /boot Feb 13 15:35:08.663670 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:35:08.663800 systemd-tmpfiles[1239]: Skipping /boot Feb 13 15:35:08.693963 zram_generator::config[1269]: No configuration found. 
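The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, after which systemd must reload to see the newly exposed unit files — that is the "Reloading requested from client PID ... ('systemd-sysext')" pair above. On a running system the same machinery is driven with the systemd-sysext tool:

    systemd-sysext status    # which hierarchies currently have extensions merged
    systemd-sysext list      # images found in /etc/extensions, /run/extensions, /var/lib/extensions
    systemd-sysext refresh   # unmerge and re-merge after adding or removing a .raw image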
Feb 13 15:35:08.770115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:08.805389 systemd[1]: Reloading finished in 156 ms. Feb 13 15:35:08.819782 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:35:08.832324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:35:08.839669 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:35:08.842044 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:35:08.844041 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:35:08.849283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:35:08.853214 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:35:08.856393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:35:08.859728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:08.862460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:08.865666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:35:08.869055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:35:08.869857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:08.874431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:08.876788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:35:08.878467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:08.878590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:08.880191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:35:08.881640 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:08.881777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:08.886643 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Feb 13 15:35:08.886904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:08.887279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:08.897288 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:35:08.900156 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:35:08.904644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:35:08.906164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:35:08.907570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:35:08.920366 augenrules[1350]: No rules Feb 13 15:35:08.920836 systemd[1]: Finished ensure-sysext.service. 
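The "Duplicate line for path ... ignoring" warnings from systemd-tmpfiles during this setup are benign: two tmpfiles.d fragments declare the same path, and only the first entry parsed is honored. For reference, the line format being parsed (tmpfiles.d(5)) looks like this — the example entries below are illustrative, not Flatcar's shipped ones:

    # Type  Path              Mode  User  Group  Age  Argument
    d       /root             0700  root  root   -
    L       /etc/example.raw  -     -     -      -    /opt/example.raw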
Feb 13 15:35:08.922047 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:35:08.922723 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:35:08.929025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:08.936110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:08.940151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:35:08.951803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:35:08.954672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:35:08.956528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:08.958563 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:35:08.963201 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:35:08.964048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:35:08.964373 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:35:08.966340 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:35:08.967581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:08.967715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:08.969026 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:35:08.969211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:35:08.970417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:08.970540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:35:08.971693 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:08.971846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:08.978301 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:35:08.978728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:08.978779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:09.007945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1339) Feb 13 15:35:09.018666 systemd-resolved[1305]: Positive Trust Anchors: Feb 13 15:35:09.018728 systemd-resolved[1305]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:35:09.018767 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:35:09.027943 systemd-resolved[1305]: Defaulting to hostname 'linux'. Feb 13 15:35:09.031216 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:35:09.032272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:35:09.048288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:35:09.057118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:35:09.065378 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:35:09.067158 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:35:09.074331 systemd-networkd[1375]: lo: Link UP Feb 13 15:35:09.074581 systemd-networkd[1375]: lo: Gained carrier Feb 13 15:35:09.075553 systemd-networkd[1375]: Enumeration completed Feb 13 15:35:09.075712 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:35:09.076633 systemd[1]: Reached target network.target - Network. Feb 13 15:35:09.083594 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:09.084011 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:35:09.084863 systemd-networkd[1375]: eth0: Link UP Feb 13 15:35:09.085019 systemd-networkd[1375]: eth0: Gained carrier Feb 13 15:35:09.085091 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:09.086126 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:35:09.087519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:35:09.094982 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:35:09.096161 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Feb 13 15:35:09.097330 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:35:09.097388 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-02-13 15:35:08.798313 UTC. Feb 13 15:35:09.111167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:35:09.116544 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:35:09.121008 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:35:09.155684 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
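eth0 is matched by the stock catch-all /usr/lib/systemd/network/zz-default.network referenced in the log (the "potentially unpredictable interface name" note is networkd warning that a wildcard match over kernel-assigned names may not be stable across boots). A sketch of such a match-everything DHCP unit — the actual shipped file may differ:

    # /usr/lib/systemd/network/zz-default.network (illustrative)
    [Match]
    Name=*

    [Network]
    DHCP=yes

networkctl status eth0 would show the resulting 10.0.0.102/16 lease and carrier state. Note also that timesyncd immediately syncs against the DHCP-provided server at 10.0.0.1 and steps the clock slightly backwards, which is why the "Initial clock synchronization" line reports an earlier wall-clock time than its own journal timestamp.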
Feb 13 15:35:09.161967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:35:09.198015 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:35:09.199348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:35:09.200258 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:35:09.201180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:35:09.202227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:35:09.203435 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:35:09.204397 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:35:09.205440 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:35:09.206487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:35:09.206524 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:35:09.207213 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:35:09.208806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:35:09.211007 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:35:09.221818 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:35:09.224026 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:35:09.225343 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:35:09.226431 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:35:09.227264 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:35:09.228027 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:35:09.228057 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:35:09.229201 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:35:09.231042 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:35:09.232186 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:35:09.233089 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:35:09.236552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:35:09.239995 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:35:09.241237 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:35:09.244985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:35:09.248310 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:35:09.251904 jq[1410]: false Feb 13 15:35:09.255119 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:35:09.260118 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:35:09.260300 extend-filesystems[1411]: Found loop3 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found loop4 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found loop5 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda1 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda2 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda3 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found usr Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda4 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda6 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda7 Feb 13 15:35:09.263516 extend-filesystems[1411]: Found vda9 Feb 13 15:35:09.263516 extend-filesystems[1411]: Checking size of /dev/vda9 Feb 13 15:35:09.276070 dbus-daemon[1409]: [system] SELinux support is enabled Feb 13 15:35:09.265179 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:35:09.265613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:35:09.266702 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:35:09.277713 jq[1426]: true Feb 13 15:35:09.269141 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:35:09.277092 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:35:09.280425 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:35:09.293265 extend-filesystems[1411]: Resized partition /dev/vda9 Feb 13 15:35:09.293353 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:35:09.293508 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:35:09.293782 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:35:09.293934 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:35:09.299839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:35:09.300037 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:35:09.305942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352) Feb 13 15:35:09.313914 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:35:09.314189 jq[1436]: true Feb 13 15:35:09.316756 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:35:09.317063 systemd-logind[1422]: New seat seat0. Feb 13 15:35:09.319103 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:35:09.325119 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:35:09.321300 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:35:09.332487 update_engine[1424]: I20250213 15:35:09.331779 1424 main.cc:92] Flatcar Update Engine starting Feb 13 15:35:09.339456 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
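extend-filesystems is enumerating the block devices and growing the root filesystem online; the kernel's "resizing filesystem from 553472 to 1864699 blocks" line is the ext4 resize happening with / still mounted. Done by hand, the equivalent is (device names taken from the log):

    lsblk /dev/vda        # confirm the partition layout
    resize2fs /dev/vda9   # grow a mounted ext4 filesystem to fill its partition

resize2fs supports online growth for ext4, so no unmount or reboot is needed.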
Feb 13 15:35:09.339593 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:35:09.341967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:35:09.342083 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:35:09.347692 tar[1435]: linux-arm64/helm Feb 13 15:35:09.347853 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:35:09.349221 update_engine[1424]: I20250213 15:35:09.349165 1424 update_check_scheduler.cc:74] Next update check in 10m44s Feb 13 15:35:09.349950 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:35:09.362182 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:35:09.365779 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:35:09.365779 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:35:09.365779 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:35:09.370995 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Feb 13 15:35:09.367976 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:35:09.369995 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:35:09.390432 bash[1463]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:35:09.399072 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:35:09.401711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:35:09.474086 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:35:09.573001 containerd[1437]: time="2025-02-13T15:35:09.571832880Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:35:09.597950 containerd[1437]: time="2025-02-13T15:35:09.597894840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599379480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599411520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599426040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599571040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599587320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599637440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599649640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599816760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599832200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599844040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:09.599944 containerd[1437]: time="2025-02-13T15:35:09.599852880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.600190 containerd[1437]: time="2025-02-13T15:35:09.599950320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.600190 containerd[1437]: time="2025-02-13T15:35:09.600130320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:09.600276 containerd[1437]: time="2025-02-13T15:35:09.600219000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:09.600276 containerd[1437]: time="2025-02-13T15:35:09.600241200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:35:09.600369 containerd[1437]: time="2025-02-13T15:35:09.600310640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:35:09.600369 containerd[1437]: time="2025-02-13T15:35:09.600346880Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:35:09.604121 containerd[1437]: time="2025-02-13T15:35:09.604089160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:35:09.604182 containerd[1437]: time="2025-02-13T15:35:09.604138600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:35:09.604182 containerd[1437]: time="2025-02-13T15:35:09.604153520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:35:09.604182 containerd[1437]: time="2025-02-13T15:35:09.604179040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:35:09.604235 containerd[1437]: time="2025-02-13T15:35:09.604193400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:35:09.604442 containerd[1437]: time="2025-02-13T15:35:09.604323440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 15:35:09.604581 containerd[1437]: time="2025-02-13T15:35:09.604522320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:35:09.604632 containerd[1437]: time="2025-02-13T15:35:09.604614040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:35:09.604656 containerd[1437]: time="2025-02-13T15:35:09.604633480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:35:09.604656 containerd[1437]: time="2025-02-13T15:35:09.604648920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:35:09.604695 containerd[1437]: time="2025-02-13T15:35:09.604661720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604695 containerd[1437]: time="2025-02-13T15:35:09.604674360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604695 containerd[1437]: time="2025-02-13T15:35:09.604692040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604762 containerd[1437]: time="2025-02-13T15:35:09.604705720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604762 containerd[1437]: time="2025-02-13T15:35:09.604720000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604762 containerd[1437]: time="2025-02-13T15:35:09.604733560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604762 containerd[1437]: time="2025-02-13T15:35:09.604754880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604823 containerd[1437]: time="2025-02-13T15:35:09.604765440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:35:09.604823 containerd[1437]: time="2025-02-13T15:35:09.604784040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604823 containerd[1437]: time="2025-02-13T15:35:09.604796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604823 containerd[1437]: time="2025-02-13T15:35:09.604808600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604823 containerd[1437]: time="2025-02-13T15:35:09.604821080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604833160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604846320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604857560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604869200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604881920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604897840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.604908 containerd[1437]: time="2025-02-13T15:35:09.604909120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.604937120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.604950640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.604964560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.604983760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.604997720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605047 containerd[1437]: time="2025-02-13T15:35:09.605008960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605177120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605197320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605207080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605219320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605228760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605239840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605249600Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:35:09.605254 containerd[1437]: time="2025-02-13T15:35:09.605259200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:35:09.605590 containerd[1437]: time="2025-02-13T15:35:09.605508400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:35:09.605590 containerd[1437]: time="2025-02-13T15:35:09.605550800Z" level=info msg="Connect containerd service" Feb 13 15:35:09.605590 containerd[1437]: time="2025-02-13T15:35:09.605580240Z" level=info msg="using legacy CRI server" Feb 13 15:35:09.605590 containerd[1437]: time="2025-02-13T15:35:09.605586680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:35:09.605831 containerd[1437]: time="2025-02-13T15:35:09.605809920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:35:09.608277 containerd[1437]: time="2025-02-13T15:35:09.608246360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:35:09.608472 
containerd[1437]: time="2025-02-13T15:35:09.608446240Z" level=info msg="Start subscribing containerd event" Feb 13 15:35:09.608472 containerd[1437]: time="2025-02-13T15:35:09.608485080Z" level=info msg="Start recovering state" Feb 13 15:35:09.608472 containerd[1437]: time="2025-02-13T15:35:09.608543640Z" level=info msg="Start event monitor" Feb 13 15:35:09.608472 containerd[1437]: time="2025-02-13T15:35:09.608553040Z" level=info msg="Start snapshots syncer" Feb 13 15:35:09.608472 containerd[1437]: time="2025-02-13T15:35:09.608561320Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:35:09.608472 containerd[1437]: time="2025-02-13T15:35:09.608568120Z" level=info msg="Start streaming server" Feb 13 15:35:09.610535 containerd[1437]: time="2025-02-13T15:35:09.610382760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:35:09.610535 containerd[1437]: time="2025-02-13T15:35:09.610448840Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:35:09.610535 containerd[1437]: time="2025-02-13T15:35:09.610512920Z" level=info msg="containerd successfully booted in 0.040020s" Feb 13 15:35:09.610604 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:35:09.701216 tar[1435]: linux-arm64/LICENSE Feb 13 15:35:09.701216 tar[1435]: linux-arm64/README.md Feb 13 15:35:09.713976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:35:09.751888 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:35:09.770027 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:35:09.784190 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:35:09.789883 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:35:09.791969 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:35:09.794689 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:35:09.808770 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:35:09.821323 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:35:09.823469 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:35:09.824629 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:35:10.986080 systemd-networkd[1375]: eth0: Gained IPv6LL Feb 13 15:35:10.988388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:35:10.989803 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:35:11.000180 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:35:11.002360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:11.004241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:35:11.019415 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:35:11.020417 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:35:11.021780 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:35:11.023490 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:35:11.471982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:11.473160 systemd[1]: Reached target multi-user.target - Multi-User System. 
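The two msg=serving... entries above show containerd exposing its ttrpc and gRPC endpoints just before systemd marks the unit started. As a rough liveness check (a sketch, not part of the boot flow; the socket paths are the ones named in the log, and a real health check would speak gRPC over them), connecting to the UNIX sockets from Python confirms both accept connections:

    import os
    import socket

    # Endpoints taken from the "msg=serving..." entries above.
    SOCKETS = [
        "/run/containerd/containerd.sock",
        "/run/containerd/containerd.sock.ttrpc",
    ]

    for path in SOCKETS:
        if not os.path.exists(path):
            print(f"{path}: missing")
            continue
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            print(f"{path}: accepting connections")
        except OSError as exc:
            print(f"{path}: {exc}")
        finally:
            sock.close()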
Feb 13 15:35:11.475639 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:11.476023 systemd[1]: Startup finished in 544ms (kernel) + 4.839s (initrd) + 3.970s (userspace) = 9.354s. Feb 13 15:35:11.486292 agetty[1499]: failed to open credentials directory Feb 13 15:35:11.486338 agetty[1500]: failed to open credentials directory Feb 13 15:35:11.926146 kubelet[1523]: E0213 15:35:11.926104 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:11.928164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:11.928292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:15.085685 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:35:15.086803 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:46290.service - OpenSSH per-connection server daemon (10.0.0.1:46290). Feb 13 15:35:15.139801 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 46290 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.141652 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.156331 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:35:15.165197 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:35:15.166970 systemd-logind[1422]: New session 1 of user core. Feb 13 15:35:15.173800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:35:15.176160 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:35:15.182376 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:35:15.251336 systemd[1542]: Queued start job for default target default.target. Feb 13 15:35:15.263804 systemd[1542]: Created slice app.slice - User Application Slice. Feb 13 15:35:15.263845 systemd[1542]: Reached target paths.target - Paths. Feb 13 15:35:15.263857 systemd[1542]: Reached target timers.target - Timers. Feb 13 15:35:15.265026 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:35:15.274365 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:35:15.274421 systemd[1542]: Reached target sockets.target - Sockets. Feb 13 15:35:15.274432 systemd[1542]: Reached target basic.target - Basic System. Feb 13 15:35:15.274465 systemd[1542]: Reached target default.target - Main User Target. Feb 13 15:35:15.274489 systemd[1542]: Startup finished in 87ms. Feb 13 15:35:15.274757 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:35:15.282055 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:35:15.340241 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:46294.service - OpenSSH per-connection server daemon (10.0.0.1:46294). 
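The kubelet failure above is the normal pre-bootstrap state on a kubeadm-style node: /var/lib/kubelet/config.yaml is only written once kubeadm init or kubeadm join runs, so until then every start attempt exits with status 1 and systemd reschedules the unit (the "Scheduled restart job" entries further down). A trivial check for that precondition, as a sketch:

    import os

    # /var/lib/kubelet/config.yaml is generated by kubeadm; its absence is
    # exactly what run.go:74 reports above.
    CONFIG = "/var/lib/kubelet/config.yaml"

    if os.path.exists(CONFIG):
        print("kubelet config present; the unit should stay up")
    else:
        print("kubelet config missing; expected until 'kubeadm init' or "
              "'kubeadm join' has run")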
Feb 13 15:35:15.377129 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 46294 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.378317 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.382784 systemd-logind[1422]: New session 2 of user core. Feb 13 15:35:15.391094 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:35:15.442583 sshd[1555]: Connection closed by 10.0.0.1 port 46294 Feb 13 15:35:15.443055 sshd-session[1553]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:15.454096 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:46294.service: Deactivated successfully. Feb 13 15:35:15.455459 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:35:15.458068 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:35:15.471177 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:46306.service - OpenSSH per-connection server daemon (10.0.0.1:46306). Feb 13 15:35:15.471872 systemd-logind[1422]: Removed session 2. Feb 13 15:35:15.504644 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 46306 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.505688 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.509084 systemd-logind[1422]: New session 3 of user core. Feb 13 15:35:15.519123 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:35:15.565703 sshd[1562]: Connection closed by 10.0.0.1 port 46306 Feb 13 15:35:15.566094 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:15.576161 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:46306.service: Deactivated successfully. Feb 13 15:35:15.577468 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:35:15.578612 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:35:15.579635 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:46310.service - OpenSSH per-connection server daemon (10.0.0.1:46310). Feb 13 15:35:15.580352 systemd-logind[1422]: Removed session 3. Feb 13 15:35:15.616632 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 46310 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.618080 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.621580 systemd-logind[1422]: New session 4 of user core. Feb 13 15:35:15.630070 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:35:15.681098 sshd[1569]: Connection closed by 10.0.0.1 port 46310 Feb 13 15:35:15.681514 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:15.690028 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:46310.service: Deactivated successfully. Feb 13 15:35:15.691320 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:35:15.692363 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:35:15.693968 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:46312.service - OpenSSH per-connection server daemon (10.0.0.1:46312). Feb 13 15:35:15.694709 systemd-logind[1422]: Removed session 4. 
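Sessions 2 through 4 above all follow the same shape: publickey accept, PAM session open, logind session, scope start, then disconnect. A small sketch that pairs the accept/close entries by client port (the regexes are written for the sshd wording shown in this log):

    import re

    OPEN = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
    CLOSE = re.compile(r"Connection closed by \S+ port (\d+)")

    def pair_sessions(lines):
        open_by_port = {}
        for line in lines:
            if m := OPEN.search(line):
                user, host, port = m.groups()
                open_by_port[port] = (user, host)
            elif (m := CLOSE.search(line)) and m.group(1) in open_by_port:
                user, host = open_by_port.pop(m.group(1))
                yield user, host, m.group(1)

    sample = [
        "sshd[1553]: Accepted publickey for core from 10.0.0.1 port 46294 ssh2: RSA ...",
        "sshd[1555]: Connection closed by 10.0.0.1 port 46294",
    ]
    for user, host, port in pair_sessions(sample):
        print(f"session for {user} from {host}:{port} opened and closed")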
Feb 13 15:35:15.731322 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 46312 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.732508 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.736477 systemd-logind[1422]: New session 5 of user core. Feb 13 15:35:15.742140 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:35:15.803460 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:35:15.803720 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:35:15.818736 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 13 15:35:15.820737 sshd[1576]: Connection closed by 10.0.0.1 port 46312 Feb 13 15:35:15.820588 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:15.831244 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:46312.service: Deactivated successfully. Feb 13 15:35:15.832560 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:35:15.835200 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:35:15.847837 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:46316.service - OpenSSH per-connection server daemon (10.0.0.1:46316). Feb 13 15:35:15.848666 systemd-logind[1422]: Removed session 5. Feb 13 15:35:15.883883 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 46316 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:15.886142 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:15.890509 systemd-logind[1422]: New session 6 of user core. Feb 13 15:35:15.901101 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:35:15.952532 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:35:15.952809 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:35:15.955882 sudo[1586]: pam_unix(sudo:session): session closed for user root Feb 13 15:35:15.960594 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:35:15.960869 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:35:15.981228 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:35:16.004857 augenrules[1608]: No rules Feb 13 15:35:16.005580 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:35:16.005786 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:35:16.007148 sudo[1585]: pam_unix(sudo:session): session closed for user root Feb 13 15:35:16.008939 sshd[1584]: Connection closed by 10.0.0.1 port 46316 Feb 13 15:35:16.009570 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:16.017569 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:46316.service: Deactivated successfully. Feb 13 15:35:16.019102 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:35:16.021321 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:35:16.031183 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:46326.service - OpenSSH per-connection server daemon (10.0.0.1:46326). Feb 13 15:35:16.032017 systemd-logind[1422]: Removed session 6. 
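sudo records each elevated command as "user : KEY=value ; KEY=value ...", which is what the core user's setenforce, rm, and systemctl invocations above look like. A tiny parser for that fixed format (a sketch; real sudo entries can carry extra fields such as TTY):

    def parse_sudo(entry):
        # e.g. "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/..."
        user, _, rest = entry.partition(" : ")
        fields = dict(part.split("=", 1) for part in rest.split(" ; "))
        return user.strip(), fields

    entry = ("core : PWD=/home/core ; USER=root ; "
             "COMMAND=/usr/sbin/systemctl restart audit-rules")
    user, fields = parse_sudo(entry)
    print(user, "ran", fields["COMMAND"], "as", fields["USER"])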
Feb 13 15:35:16.067893 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 46326 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:35:16.069196 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:16.073211 systemd-logind[1422]: New session 7 of user core. Feb 13 15:35:16.089087 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:35:16.138791 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:35:16.139417 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:35:16.465245 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:35:16.465314 (dockerd)[1640]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:35:16.727607 dockerd[1640]: time="2025-02-13T15:35:16.726954875Z" level=info msg="Starting up" Feb 13 15:35:16.894537 dockerd[1640]: time="2025-02-13T15:35:16.894500460Z" level=info msg="Loading containers: start." Feb 13 15:35:17.042936 kernel: Initializing XFRM netlink socket Feb 13 15:35:17.117183 systemd-networkd[1375]: docker0: Link UP Feb 13 15:35:17.159406 dockerd[1640]: time="2025-02-13T15:35:17.159307473Z" level=info msg="Loading containers: done." Feb 13 15:35:17.181559 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck418645651-merged.mount: Deactivated successfully. Feb 13 15:35:17.183964 dockerd[1640]: time="2025-02-13T15:35:17.183902917Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:35:17.184079 dockerd[1640]: time="2025-02-13T15:35:17.184024151Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:35:17.184269 dockerd[1640]: time="2025-02-13T15:35:17.184238519Z" level=info msg="Daemon has completed initialization" Feb 13 15:35:17.225219 dockerd[1640]: time="2025-02-13T15:35:17.225059019Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:35:17.225436 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:35:17.962410 containerd[1437]: time="2025-02-13T15:35:17.962299571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:35:18.600268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076922898.mount: Deactivated successfully. 
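The daemon's final startup line, "API listen on /run/docker.sock", is the usual cue that the Engine API is usable. A dependency-free ping over the UNIX socket, as a sketch (the /_ping endpoint is part of the Docker Engine API; HTTP/1.0 keeps the exchange to a single read):

    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/docker.sock")
    # GET /_ping returns "OK" with a 200 status when the daemon is healthy.
    sock.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = sock.recv(4096).decode()
    sock.close()
    print(reply.splitlines()[0])  # expect a 200 OK status line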
Feb 13 15:35:19.643506 containerd[1437]: time="2025-02-13T15:35:19.643448172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:19.644526 containerd[1437]: time="2025-02-13T15:35:19.644494167Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 15:35:19.645692 containerd[1437]: time="2025-02-13T15:35:19.645389316Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:19.648587 containerd[1437]: time="2025-02-13T15:35:19.648537427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:19.649983 containerd[1437]: time="2025-02-13T15:35:19.649953458Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.687613185s" Feb 13 15:35:19.650048 containerd[1437]: time="2025-02-13T15:35:19.649989014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:35:19.668090 containerd[1437]: time="2025-02-13T15:35:19.668055146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:35:20.929294 containerd[1437]: time="2025-02-13T15:35:20.929225528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:20.930914 containerd[1437]: time="2025-02-13T15:35:20.930859230Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 15:35:20.932147 containerd[1437]: time="2025-02-13T15:35:20.932103002Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:20.935744 containerd[1437]: time="2025-02-13T15:35:20.935683733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:20.937699 containerd[1437]: time="2025-02-13T15:35:20.937153581Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.269060494s" Feb 13 15:35:20.937699 containerd[1437]: time="2025-02-13T15:35:20.937187366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
15:35:20.957039 containerd[1437]: time="2025-02-13T15:35:20.956799898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:35:21.830020 containerd[1437]: time="2025-02-13T15:35:21.829927385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:21.831465 containerd[1437]: time="2025-02-13T15:35:21.830484115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 15:35:21.831465 containerd[1437]: time="2025-02-13T15:35:21.831406354Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:21.834820 containerd[1437]: time="2025-02-13T15:35:21.834762991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:21.835910 containerd[1437]: time="2025-02-13T15:35:21.835818910Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 878.978522ms" Feb 13 15:35:21.835910 containerd[1437]: time="2025-02-13T15:35:21.835852697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:35:21.854717 containerd[1437]: time="2025-02-13T15:35:21.854679873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:35:22.006128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:35:22.015106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:22.122558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:22.126362 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:22.168314 kubelet[1931]: E0213 15:35:22.168221 1931 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:22.171473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:22.171609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:22.908708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419468765.mount: Deactivated successfully. 
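Each successful pull above ends with the image's byte size and wall-clock time (for example 17568681 bytes in 878.978522ms for the scheduler). Dividing the two gives a rough per-image throughput; a sketch that works over the log's own wording:

    import re

    PAT = re.compile(r'size "(\d+)" in ([\d.]+)(ms|s)\b')

    def throughput_mb_s(entry):
        m = PAT.search(entry)
        if not m:
            return None
        size_bytes = int(m.group(1))
        seconds = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
        return size_bytes / seconds / 1e6

    entry = ('Pulled image "registry.k8s.io/kube-scheduler:v1.30.10" ... '
             'size "17568681" in 878.978522ms')
    print(f"{throughput_mb_s(entry):.1f} MB/s")  # ~20.0 MB/s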
Feb 13 15:35:23.120225 containerd[1437]: time="2025-02-13T15:35:23.120173871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:23.121257 containerd[1437]: time="2025-02-13T15:35:23.121190413Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 15:35:23.122939 containerd[1437]: time="2025-02-13T15:35:23.122895191Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:23.124931 containerd[1437]: time="2025-02-13T15:35:23.124880034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:23.125528 containerd[1437]: time="2025-02-13T15:35:23.125503064Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.270782682s" Feb 13 15:35:23.125792 containerd[1437]: time="2025-02-13T15:35:23.125610233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:35:23.142947 containerd[1437]: time="2025-02-13T15:35:23.142925807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:35:23.763068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915652742.mount: Deactivated successfully. 
Feb 13 15:35:24.433198 containerd[1437]: time="2025-02-13T15:35:24.433151035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:24.433563 containerd[1437]: time="2025-02-13T15:35:24.433520852Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:35:24.434441 containerd[1437]: time="2025-02-13T15:35:24.434386038Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:24.437227 containerd[1437]: time="2025-02-13T15:35:24.437198339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:24.438286 containerd[1437]: time="2025-02-13T15:35:24.438253146Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.29530039s" Feb 13 15:35:24.438336 containerd[1437]: time="2025-02-13T15:35:24.438285597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:35:24.455655 containerd[1437]: time="2025-02-13T15:35:24.455627371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:35:25.010270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160099083.mount: Deactivated successfully. 
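Note the version skew here: containerd's CRI config dump earlier in the log declares SandboxImage registry.k8s.io/pause:3.8, while the pull just requested is pause:3.9 (pause:3.8 is still pulled separately for the sandboxes later in the log). Spotting that mismatch mechanically, as a sketch:

    import re

    def pause_tag(text):
        m = re.search(r"pause:([\d.]+)", text)
        return m.group(1) if m else None

    configured = "SandboxImage:registry.k8s.io/pause:3.8"  # from the CRI config dump
    pulled = 'PullImage "registry.k8s.io/pause:3.9"'       # from the entry above

    if pause_tag(configured) != pause_tag(pulled):
        print(f"sandbox image skew: containerd uses pause:{pause_tag(configured)}, "
              f"the pre-pull fetched pause:{pause_tag(pulled)}")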
Feb 13 15:35:25.014902 containerd[1437]: time="2025-02-13T15:35:25.014177191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:25.015538 containerd[1437]: time="2025-02-13T15:35:25.015494436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 15:35:25.016432 containerd[1437]: time="2025-02-13T15:35:25.016392246Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:25.018305 containerd[1437]: time="2025-02-13T15:35:25.018241833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:25.019147 containerd[1437]: time="2025-02-13T15:35:25.019037800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 563.37654ms" Feb 13 15:35:25.019147 containerd[1437]: time="2025-02-13T15:35:25.019064425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:35:25.036494 containerd[1437]: time="2025-02-13T15:35:25.036467316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:35:25.542541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954180346.mount: Deactivated successfully. Feb 13 15:35:27.073992 containerd[1437]: time="2025-02-13T15:35:27.073944867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:27.075026 containerd[1437]: time="2025-02-13T15:35:27.074385001Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 15:35:27.075942 containerd[1437]: time="2025-02-13T15:35:27.075465374Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:27.078343 containerd[1437]: time="2025-02-13T15:35:27.078307959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:27.079672 containerd[1437]: time="2025-02-13T15:35:27.079528748Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.043032091s" Feb 13 15:35:27.079672 containerd[1437]: time="2025-02-13T15:35:27.079570546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:35:32.256125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
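kubelet.service has now been rescheduled twice, at 15:35:22.006128 and 15:35:32.256125. The roughly 10 s spacing is consistent with a unit configured with Restart=always and RestartSec=10 (an assumption; the unit file itself never appears in this log). Computing the spacing from the two timestamps:

    from datetime import datetime

    # Timestamps copied from the two "Scheduled restart job" entries.
    t1 = datetime.strptime("15:35:22.006128", "%H:%M:%S.%f")
    t2 = datetime.strptime("15:35:32.256125", "%H:%M:%S.%f")
    print(f"restart spacing: {(t2 - t1).total_seconds():.2f}s")  # ~10.25s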
Feb 13 15:35:32.266098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:32.351784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:32.355740 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:32.393524 kubelet[2145]: E0213 15:35:32.393430 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:32.395405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:32.395543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:33.262594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:33.275173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:33.292871 systemd[1]: Reloading requested from client PID 2161 ('systemctl') (unit session-7.scope)... Feb 13 15:35:33.292891 systemd[1]: Reloading... Feb 13 15:35:33.360950 zram_generator::config[2198]: No configuration found. Feb 13 15:35:33.537963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:33.591567 systemd[1]: Reloading finished in 298 ms. Feb 13 15:35:33.645574 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:35:33.645637 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:35:33.645858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:33.648064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:33.747799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:33.751496 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:35:33.791987 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:33.791987 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:35:33.791987 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
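The deprecation warnings above point at KubeletConfiguration equivalents for two of the three flags (--pod-infra-container-image has no config-file replacement; the image garbage collector takes the sandbox image from the CRI instead). A sketch of the matching stanza, with field names per the kubelet.config.k8s.io/v1beta1 API and values taken from elsewhere in this log rather than from an actual unit file:

    # The endpoint matches ContainerdEndpoint in the CRI config dump; the
    # plugin dir matches the Flexvolume path the kubelet recreates below.
    equivalent = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    for key, value in equivalent.items():
        print(f"{key}: {value}")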
Feb 13 15:35:33.792840 kubelet[2246]: I0213 15:35:33.792790 2246 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:35:34.658405 kubelet[2246]: I0213 15:35:34.658347 2246 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:35:34.658405 kubelet[2246]: I0213 15:35:34.658389 2246 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:35:34.658621 kubelet[2246]: I0213 15:35:34.658605 2246 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:35:34.704324 kubelet[2246]: I0213 15:35:34.704207 2246 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:35:34.704667 kubelet[2246]: E0213 15:35:34.704647 2246 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.716610 kubelet[2246]: I0213 15:35:34.716575 2246 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:35:34.717773 kubelet[2246]: I0213 15:35:34.717718 2246 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:35:34.717981 kubelet[2246]: I0213 15:35:34.717767 2246 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:35:34.718101 kubelet[2246]: I0213 15:35:34.718089 2246 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:35:34.718127 kubelet[2246]: I0213 15:35:34.718102 2246 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:35:34.718429 kubelet[2246]: I0213 15:35:34.718402 2246 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:35:34.723084 kubelet[2246]: I0213 15:35:34.722878 2246 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:35:34.723084 kubelet[2246]: I0213 15:35:34.722902 2246 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:35:34.723259 kubelet[2246]: I0213 15:35:34.723250 2246 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:35:34.723445 kubelet[2246]: I0213 15:35:34.723426 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:35:34.724052 kubelet[2246]: W0213 15:35:34.723750 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.724052 kubelet[2246]: E0213 15:35:34.723811 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.724052 kubelet[2246]: W0213 15:35:34.723980 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.724052 kubelet[2246]: E0213 15:35:34.724025 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.728449 kubelet[2246]: I0213 15:35:34.728403 2246 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:35:34.729057 kubelet[2246]: I0213 15:35:34.728982 2246 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:35:34.729116 kubelet[2246]: W0213 15:35:34.729101 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
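The reflector errors above ("dial tcp 10.0.0.102:6443: connect: connection refused") are the expected bootstrap loop on a control-plane node: the kubelet keeps polling the API server that it is itself about to launch as a static pod, and the watches start succeeding once that pod is up. A minimal wait-for-port probe in the same spirit:

    import socket
    import time

    def wait_for(host, port, timeout=60.0, interval=2.0):
        """Retry a TCP connect until it succeeds or the deadline passes."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                socket.create_connection((host, port), timeout=1).close()
                return True
            except OSError:
                time.sleep(interval)
        return False

    # Address taken from the reflector errors above.
    print("apiserver reachable:", wait_for("10.0.0.102", 6443))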
Feb 13 15:35:34.730199 kubelet[2246]: I0213 15:35:34.730169 2246 server.go:1264] "Started kubelet" Feb 13 15:35:34.730441 kubelet[2246]: I0213 15:35:34.730410 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:35:34.731646 kubelet[2246]: I0213 15:35:34.731620 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:35:34.732679 kubelet[2246]: I0213 15:35:34.732234 2246 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:35:34.733125 kubelet[2246]: I0213 15:35:34.733066 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:35:34.733288 kubelet[2246]: I0213 15:35:34.733263 2246 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:35:34.733779 kubelet[2246]: E0213 15:35:34.733269 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce810b4248de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:35:34.73013987 +0000 UTC m=+0.975660512,LastTimestamp:2025-02-13 15:35:34.73013987 +0000 UTC m=+0.975660512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:35:34.739714 kubelet[2246]: E0213 15:35:34.736323 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:34.739714 kubelet[2246]: I0213 15:35:34.736596 2246 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:35:34.739714 kubelet[2246]: I0213 15:35:34.736702 2246 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:35:34.739714 kubelet[2246]: I0213 15:35:34.737725 2246 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:35:34.739714 kubelet[2246]: W0213 15:35:34.738335 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.739714 kubelet[2246]: E0213 15:35:34.738400 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.739714 kubelet[2246]: E0213 15:35:34.738479 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Feb 13 15:35:34.739714 kubelet[2246]: I0213 15:35:34.738666 2246 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:35:34.739714 kubelet[2246]: I0213 15:35:34.738769 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:35:34.740672 kubelet[2246]: I0213 15:35:34.740624 2246 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:35:34.741520 kubelet[2246]: E0213 15:35:34.741499 2246 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:35:34.748800 kubelet[2246]: I0213 15:35:34.748736 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:35:34.750314 kubelet[2246]: I0213 15:35:34.750010 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:35:34.750314 kubelet[2246]: I0213 15:35:34.750168 2246 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:35:34.750314 kubelet[2246]: I0213 15:35:34.750190 2246 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:35:34.750314 kubelet[2246]: E0213 15:35:34.750232 2246 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:35:34.753572 kubelet[2246]: W0213 15:35:34.753522 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.754372 kubelet[2246]: E0213 15:35:34.754349 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Feb 13 15:35:34.757785 kubelet[2246]: I0213 15:35:34.757763 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:35:34.757987 kubelet[2246]: I0213 15:35:34.757781 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:35:34.758123 kubelet[2246]: I0213 15:35:34.758111 2246 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:34.818960 kubelet[2246]: I0213 15:35:34.818852 2246 policy_none.go:49] "None policy: Start" Feb 13 15:35:34.819835 kubelet[2246]: I0213 15:35:34.819807 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:35:34.819835 kubelet[2246]: I0213 15:35:34.819839 2246 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:35:34.825573 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:35:34.838708 kubelet[2246]: I0213 15:35:34.838657 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:34.839040 kubelet[2246]: E0213 15:35:34.839017 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Feb 13 15:35:34.842049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:35:34.844541 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
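The three slices just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) mirror the Kubernetes QoS classes. A simplified classification rule as a sketch (the real rule examines every container in the pod and both cpu and memory):

    def qos_class(requests, limits):
        """Simplified QoS classification; per-pod, single-resource view."""
        if requests and requests == limits:
            return "guaranteed"   # parented directly under kubepods.slice
        if requests or limits:
            return "burstable"    # kubepods-burstable.slice
        return "besteffort"       # kubepods-besteffort.slice

    print(qos_class({"cpu": "100m", "memory": "128Mi"},
                    {"cpu": "100m", "memory": "128Mi"}))  # guaranteed
    print(qos_class({"cpu": "100m"}, {}))                 # burstable
    print(qos_class({}, {}))                              # besteffort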
Feb 13 15:35:34.850522 kubelet[2246]: E0213 15:35:34.850491 2246 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:35:34.861254 kubelet[2246]: I0213 15:35:34.860706 2246 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:35:34.861254 kubelet[2246]: I0213 15:35:34.860893 2246 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:35:34.861254 kubelet[2246]: I0213 15:35:34.861056 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:35:34.862265 kubelet[2246]: E0213 15:35:34.862233 2246 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:35:34.939861 kubelet[2246]: E0213 15:35:34.939739 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Feb 13 15:35:35.040161 kubelet[2246]: I0213 15:35:35.040119 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:35.040438 kubelet[2246]: E0213 15:35:35.040411 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Feb 13 15:35:35.051629 kubelet[2246]: I0213 15:35:35.051572 2246 topology_manager.go:215] "Topology Admit Handler" podUID="2bdf16eb524140b4161947f87f92387d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:35:35.052613 kubelet[2246]: I0213 15:35:35.052590 2246 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:35:35.053443 kubelet[2246]: I0213 15:35:35.053412 2246 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:35:35.059499 systemd[1]: Created slice kubepods-burstable-pod2bdf16eb524140b4161947f87f92387d.slice - libcontainer container kubepods-burstable-pod2bdf16eb524140b4161947f87f92387d.slice. Feb 13 15:35:35.080752 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:35:35.083819 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
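Each static pod admitted above gets a per-pod slice under its QoS parent, named from the pod UID. Deriving the slice name the way the systemd cgroup driver does, as a sketch (dash escaping shown for completeness; these UIDs happen to contain none), with the apiserver pod's UID copied from the log:

    def pod_slice(qos, pod_uid):
        # systemd cgroup driver: '-' in the UID is escaped to '_'
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "2bdf16eb524140b4161947f87f92387d"))
    # -> kubepods-burstable-pod2bdf16eb524140b4161947f87f92387d.slice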
Feb 13 15:35:35.139651 kubelet[2246]: I0213 15:35:35.139607 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:35.139651 kubelet[2246]: I0213 15:35:35.139648 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:35.139788 kubelet[2246]: I0213 15:35:35.139673 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:35.139788 kubelet[2246]: I0213 15:35:35.139700 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:35.139788 kubelet[2246]: I0213 15:35:35.139714 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:35.139788 kubelet[2246]: I0213 15:35:35.139735 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:35.139788 kubelet[2246]: I0213 15:35:35.139750 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:35.139900 kubelet[2246]: I0213 15:35:35.139764 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:35:35.139900 kubelet[2246]: I0213 15:35:35.139780 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:35.341191 kubelet[2246]: E0213 15:35:35.341144 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms"
Feb 13 15:35:35.378535 kubelet[2246]: E0213 15:35:35.378446 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:35.379222 containerd[1437]: time="2025-02-13T15:35:35.379175596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2bdf16eb524140b4161947f87f92387d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:35.383696 kubelet[2246]: E0213 15:35:35.383660 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:35.384103 containerd[1437]: time="2025-02-13T15:35:35.384074330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:35.386334 kubelet[2246]: E0213 15:35:35.386305 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:35.386851 containerd[1437]: time="2025-02-13T15:35:35.386630894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:35.442388 kubelet[2246]: I0213 15:35:35.442347 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:35:35.442746 kubelet[2246]: E0213 15:35:35.442709 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Feb 13 15:35:35.568538 kubelet[2246]: W0213 15:35:35.568466 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.568538 kubelet[2246]: E0213 15:35:35.568538 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.587277 kubelet[2246]: W0213 15:35:35.587210 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.587321 kubelet[2246]: E0213 15:35:35.587291 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.868683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976897444.mount: Deactivated successfully.
Feb 13 15:35:35.872072 containerd[1437]: time="2025-02-13T15:35:35.872031877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:35.873317 containerd[1437]: time="2025-02-13T15:35:35.873274626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:35:35.875349 containerd[1437]: time="2025-02-13T15:35:35.875297020Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:35.876545 containerd[1437]: time="2025-02-13T15:35:35.876502379Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:35.877030 containerd[1437]: time="2025-02-13T15:35:35.876995484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:35:35.877988 containerd[1437]: time="2025-02-13T15:35:35.877942227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:35.879409 containerd[1437]: time="2025-02-13T15:35:35.879201035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:35.879409 containerd[1437]: time="2025-02-13T15:35:35.879316082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:35:35.881962 containerd[1437]: time="2025-02-13T15:35:35.880908886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.219151ms"
Feb 13 15:35:35.882690 containerd[1437]: time="2025-02-13T15:35:35.882484753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.22415ms"
Feb 13 15:35:35.887176 containerd[1437]: time="2025-02-13T15:35:35.887028279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.886199ms"
Feb 13 15:35:35.920774 kubelet[2246]: W0213 15:35:35.920712 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.920774 kubelet[2246]: E0213 15:35:35.920769 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.998361 kubelet[2246]: W0213 15:35:35.998295 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:35.998361 kubelet[2246]: E0213 15:35:35.998370 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004219377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004263846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004279587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004165120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004249942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004307834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.004416 containerd[1437]: time="2025-02-13T15:35:36.004371241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.004912 containerd[1437]: time="2025-02-13T15:35:36.004732901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.007065 containerd[1437]: time="2025-02-13T15:35:36.006513831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:36.007065 containerd[1437]: time="2025-02-13T15:35:36.006561456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:36.007065 containerd[1437]: time="2025-02-13T15:35:36.006571964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.007065 containerd[1437]: time="2025-02-13T15:35:36.006630456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:36.030085 systemd[1]: Started cri-containerd-1c0f8ec1c55439cba4ecfb934a5e547fdc9f9e4e449aa01ecc3a54daa84e2a07.scope - libcontainer container 1c0f8ec1c55439cba4ecfb934a5e547fdc9f9e4e449aa01ecc3a54daa84e2a07.
Feb 13 15:35:36.031314 systemd[1]: Started cri-containerd-429358d96ac4724f8991ed2513e899e1eb71b952f08d528b523aca6b0346249e.scope - libcontainer container 429358d96ac4724f8991ed2513e899e1eb71b952f08d528b523aca6b0346249e.
Feb 13 15:35:36.032276 systemd[1]: Started cri-containerd-8b653af8a9c70cb113ff48d1275d8dfc3852ca5652e1bfd2b76bb626c2678a6e.scope - libcontainer container 8b653af8a9c70cb113ff48d1275d8dfc3852ca5652e1bfd2b76bb626c2678a6e.
Feb 13 15:35:36.067204 containerd[1437]: time="2025-02-13T15:35:36.064386665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2bdf16eb524140b4161947f87f92387d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c0f8ec1c55439cba4ecfb934a5e547fdc9f9e4e449aa01ecc3a54daa84e2a07\""
Feb 13 15:35:36.067204 containerd[1437]: time="2025-02-13T15:35:36.064468890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"429358d96ac4724f8991ed2513e899e1eb71b952f08d528b523aca6b0346249e\""
Feb 13 15:35:36.069442 containerd[1437]: time="2025-02-13T15:35:36.069377546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b653af8a9c70cb113ff48d1275d8dfc3852ca5652e1bfd2b76bb626c2678a6e\""
Feb 13 15:35:36.070767 kubelet[2246]: E0213 15:35:36.070718 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:36.070845 kubelet[2246]: E0213 15:35:36.070773 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:36.070845 kubelet[2246]: E0213 15:35:36.070718 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:36.073503 containerd[1437]: time="2025-02-13T15:35:36.073471509Z" level=info msg="CreateContainer within sandbox \"1c0f8ec1c55439cba4ecfb934a5e547fdc9f9e4e449aa01ecc3a54daa84e2a07\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:35:36.073805 containerd[1437]: time="2025-02-13T15:35:36.073674513Z" level=info msg="CreateContainer within sandbox \"8b653af8a9c70cb113ff48d1275d8dfc3852ca5652e1bfd2b76bb626c2678a6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:35:36.073963 containerd[1437]: time="2025-02-13T15:35:36.073687578Z" level=info msg="CreateContainer within sandbox \"429358d96ac4724f8991ed2513e899e1eb71b952f08d528b523aca6b0346249e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:35:36.094710 containerd[1437]: time="2025-02-13T15:35:36.094638514Z" level=info msg="CreateContainer within sandbox \"429358d96ac4724f8991ed2513e899e1eb71b952f08d528b523aca6b0346249e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de3ff5719a3e237fbe48bd217ad390b3798aaac70ed843aef23d687bf08bbbbc\""
Feb 13 15:35:36.095567 containerd[1437]: time="2025-02-13T15:35:36.095330030Z" level=info msg="StartContainer for \"de3ff5719a3e237fbe48bd217ad390b3798aaac70ed843aef23d687bf08bbbbc\""
Feb 13 15:35:36.097833 containerd[1437]: time="2025-02-13T15:35:36.097714460Z" level=info msg="CreateContainer within sandbox \"1c0f8ec1c55439cba4ecfb934a5e547fdc9f9e4e449aa01ecc3a54daa84e2a07\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2217fb584b9b14bcb22d7b8a61f1775e5ba87c5fc8856ae49e2c52c4bd60a488\""
Feb 13 15:35:36.098259 containerd[1437]: time="2025-02-13T15:35:36.098210723Z" level=info msg="StartContainer for \"2217fb584b9b14bcb22d7b8a61f1775e5ba87c5fc8856ae49e2c52c4bd60a488\""
Feb 13 15:35:36.099928 containerd[1437]: time="2025-02-13T15:35:36.099890811Z" level=info msg="CreateContainer within sandbox \"8b653af8a9c70cb113ff48d1275d8dfc3852ca5652e1bfd2b76bb626c2678a6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"997e049a6f689cf5859e199eaf2762e972793dfcee378bf579b7b1ebba104344\""
Feb 13 15:35:36.100287 containerd[1437]: time="2025-02-13T15:35:36.100259462Z" level=info msg="StartContainer for \"997e049a6f689cf5859e199eaf2762e972793dfcee378bf579b7b1ebba104344\""
Feb 13 15:35:36.126094 systemd[1]: Started cri-containerd-2217fb584b9b14bcb22d7b8a61f1775e5ba87c5fc8856ae49e2c52c4bd60a488.scope - libcontainer container 2217fb584b9b14bcb22d7b8a61f1775e5ba87c5fc8856ae49e2c52c4bd60a488.
Feb 13 15:35:36.127517 systemd[1]: Started cri-containerd-997e049a6f689cf5859e199eaf2762e972793dfcee378bf579b7b1ebba104344.scope - libcontainer container 997e049a6f689cf5859e199eaf2762e972793dfcee378bf579b7b1ebba104344.
Feb 13 15:35:36.128764 systemd[1]: Started cri-containerd-de3ff5719a3e237fbe48bd217ad390b3798aaac70ed843aef23d687bf08bbbbc.scope - libcontainer container de3ff5719a3e237fbe48bd217ad390b3798aaac70ed843aef23d687bf08bbbbc.
Feb 13 15:35:36.141988 kubelet[2246]: E0213 15:35:36.141944 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s"
Feb 13 15:35:36.180671 containerd[1437]: time="2025-02-13T15:35:36.180632872Z" level=info msg="StartContainer for \"de3ff5719a3e237fbe48bd217ad390b3798aaac70ed843aef23d687bf08bbbbc\" returns successfully"
Feb 13 15:35:36.181241 containerd[1437]: time="2025-02-13T15:35:36.180897404Z" level=info msg="StartContainer for \"997e049a6f689cf5859e199eaf2762e972793dfcee378bf579b7b1ebba104344\" returns successfully"
Feb 13 15:35:36.187629 containerd[1437]: time="2025-02-13T15:35:36.187537369Z" level=info msg="StartContainer for \"2217fb584b9b14bcb22d7b8a61f1775e5ba87c5fc8856ae49e2c52c4bd60a488\" returns successfully"
Feb 13 15:35:36.247128 kubelet[2246]: I0213 15:35:36.246161 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:35:36.247128 kubelet[2246]: E0213 15:35:36.246486 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Feb 13 15:35:36.763437 kubelet[2246]: E0213 15:35:36.763404 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:36.768180 kubelet[2246]: E0213 15:35:36.768151 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:36.768313 kubelet[2246]: E0213 15:35:36.768295 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:37.771122 kubelet[2246]: E0213 15:35:37.771047 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:37.771122 kubelet[2246]: E0213 15:35:37.771125 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:37.848247 kubelet[2246]: I0213 15:35:37.848215 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:35:37.976713 kubelet[2246]: E0213 15:35:37.976661 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:35:38.043889 kubelet[2246]: I0213 15:35:38.043416 2246 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:35:38.053858 kubelet[2246]: E0213 15:35:38.053815 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:35:38.154011 kubelet[2246]: E0213 15:35:38.153974 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:35:38.254806 kubelet[2246]: E0213 15:35:38.254768 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:35:38.355663 kubelet[2246]: E0213 15:35:38.355282 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:35:38.726035 kubelet[2246]: I0213 15:35:38.724939 2246 apiserver.go:52] "Watching apiserver"
Feb 13 15:35:38.737489 kubelet[2246]: I0213 15:35:38.737455 2246 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:35:40.022256 kubelet[2246]: E0213 15:35:40.022205 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:40.210715 systemd[1]: Reloading requested from client PID 2522 ('systemctl') (unit session-7.scope)...
Feb 13 15:35:40.210732 systemd[1]: Reloading...
Feb 13 15:35:40.280967 zram_generator::config[2564]: No configuration found.
Feb 13 15:35:40.364006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:35:40.429318 systemd[1]: Reloading finished in 218 ms.
Feb 13 15:35:40.463156 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:40.463310 kubelet[2246]: I0213 15:35:40.463146 2246 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:35:40.473768 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:35:40.474751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:40.474809 systemd[1]: kubelet.service: Consumed 1.359s CPU time, 114.5M memory peak, 0B memory swap peak.
Feb 13 15:35:40.484295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:40.571723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:40.575619 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:35:40.624791 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:35:40.624791 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:35:40.624791 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:35:40.624791 kubelet[2603]: I0213 15:35:40.624755 2603 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:35:40.630068 kubelet[2603]: I0213 15:35:40.630032 2603 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:35:40.630068 kubelet[2603]: I0213 15:35:40.630059 2603 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:35:40.630243 kubelet[2603]: I0213 15:35:40.630228 2603 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:35:40.631548 kubelet[2603]: I0213 15:35:40.631525 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:35:40.632725 kubelet[2603]: I0213 15:35:40.632702 2603 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:35:40.638169 kubelet[2603]: I0213 15:35:40.638142 2603 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:35:40.638562 kubelet[2603]: I0213 15:35:40.638527 2603 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:35:40.638801 kubelet[2603]: I0213 15:35:40.638594 2603 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:35:40.638868 kubelet[2603]: I0213 15:35:40.638805 2603 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:35:40.638868 kubelet[2603]: I0213 15:35:40.638815 2603 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:35:40.638868 kubelet[2603]: I0213 15:35:40.638864 2603 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:35:40.639048 kubelet[2603]: I0213 15:35:40.639012 2603 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:35:40.639048 kubelet[2603]: I0213 15:35:40.639027 2603 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:35:40.639780 kubelet[2603]: I0213 15:35:40.639748 2603 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:35:40.639780 kubelet[2603]: I0213 15:35:40.639780 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:35:40.640519 kubelet[2603]: I0213 15:35:40.640500 2603 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:35:40.640708 kubelet[2603]: I0213 15:35:40.640687 2603 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:35:40.641113 kubelet[2603]: I0213 15:35:40.641087 2603 server.go:1264] "Started kubelet"
Feb 13 15:35:40.641843 kubelet[2603]: I0213 15:35:40.641709 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:35:40.642034 kubelet[2603]: I0213 15:35:40.642001 2603 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:35:40.647087 kubelet[2603]: I0213 15:35:40.647049 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:35:40.648417 kubelet[2603]: I0213 15:35:40.648375 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:35:40.650963 kubelet[2603]: I0213 15:35:40.649255 2603 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:35:40.650963 kubelet[2603]: I0213 15:35:40.649566 2603 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:35:40.650963 kubelet[2603]: I0213 15:35:40.650169 2603 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:35:40.650963 kubelet[2603]: I0213 15:35:40.650427 2603 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:35:40.652579 kubelet[2603]: I0213 15:35:40.652549 2603 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:35:40.652681 kubelet[2603]: I0213 15:35:40.652652 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:35:40.664712 kubelet[2603]: I0213 15:35:40.664681 2603 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:35:40.666741 kubelet[2603]: E0213 15:35:40.666343 2603 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:35:40.670576 kubelet[2603]: I0213 15:35:40.670542 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:35:40.672734 kubelet[2603]: I0213 15:35:40.672707 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:35:40.672857 kubelet[2603]: I0213 15:35:40.672847 2603 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:35:40.672940 kubelet[2603]: I0213 15:35:40.672909 2603 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:35:40.673905 kubelet[2603]: E0213 15:35:40.673876 2603 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:35:40.697310 kubelet[2603]: I0213 15:35:40.697287 2603 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:35:40.697503 kubelet[2603]: I0213 15:35:40.697487 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:35:40.697576 kubelet[2603]: I0213 15:35:40.697567 2603 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:35:40.697765 kubelet[2603]: I0213 15:35:40.697750 2603 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:35:40.697844 kubelet[2603]: I0213 15:35:40.697820 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:35:40.697887 kubelet[2603]: I0213 15:35:40.697880 2603 policy_none.go:49] "None policy: Start"
Feb 13 15:35:40.698498 kubelet[2603]: I0213 15:35:40.698482 2603 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:35:40.698638 kubelet[2603]: I0213 15:35:40.698626 2603 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:35:40.698842 kubelet[2603]: I0213 15:35:40.698828 2603 state_mem.go:75] "Updated machine memory state"
Feb 13 15:35:40.702382 kubelet[2603]: I0213 15:35:40.702352 2603 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:35:40.702699 kubelet[2603]: I0213 15:35:40.702500 2603 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:35:40.702699 kubelet[2603]: I0213 15:35:40.702605 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:35:40.753843 kubelet[2603]: I0213 15:35:40.753817 2603 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:35:40.759931 kubelet[2603]: I0213 15:35:40.759724 2603 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 15:35:40.759931 kubelet[2603]: I0213 15:35:40.759802 2603 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:35:40.774520 kubelet[2603]: I0213 15:35:40.774468 2603 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:35:40.774648 kubelet[2603]: I0213 15:35:40.774579 2603 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:35:40.774648 kubelet[2603]: I0213 15:35:40.774613 2603 topology_manager.go:215] "Topology Admit Handler" podUID="2bdf16eb524140b4161947f87f92387d" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:35:40.781304 kubelet[2603]: E0213 15:35:40.781220 2603 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:40.850677 kubelet[2603]: I0213 15:35:40.850625 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:40.850677 kubelet[2603]: I0213 15:35:40.850673 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:40.850817 kubelet[2603]: I0213 15:35:40.850694 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:40.850817 kubelet[2603]: I0213 15:35:40.850717 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:40.850817 kubelet[2603]: I0213 15:35:40.850734 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:35:40.850817 kubelet[2603]: I0213 15:35:40.850750 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:40.850817 kubelet[2603]: I0213 15:35:40.850796 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:40.850939 kubelet[2603]: I0213 15:35:40.850837 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:35:40.850939 kubelet[2603]: I0213 15:35:40.850858 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bdf16eb524140b4161947f87f92387d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2bdf16eb524140b4161947f87f92387d\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:41.081661 kubelet[2603]: E0213 15:35:41.081428 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.081661 kubelet[2603]: E0213 15:35:41.081453 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.081850 kubelet[2603]: E0213 15:35:41.081802 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.263149 sudo[2641]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:35:41.263411 sudo[2641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:35:41.640714 kubelet[2603]: I0213 15:35:41.640654 2603 apiserver.go:52] "Watching apiserver"
Feb 13 15:35:41.651110 kubelet[2603]: I0213 15:35:41.651064 2603 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:35:41.684323 kubelet[2603]: E0213 15:35:41.683855 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.686491 kubelet[2603]: E0213 15:35:41.685626 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.689194 kubelet[2603]: E0213 15:35:41.689155 2603 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:35:41.690565 kubelet[2603]: E0213 15:35:41.690541 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:41.702297 sudo[2641]: pam_unix(sudo:session): session closed for user root
Feb 13 15:35:41.723026 kubelet[2603]: I0213 15:35:41.722949 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.720465582 podStartE2EDuration="1.720465582s" podCreationTimestamp="2025-02-13 15:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:41.705178993 +0000 UTC m=+1.126761918" watchObservedRunningTime="2025-02-13 15:35:41.720465582 +0000 UTC m=+1.142048507"
Feb 13 15:35:41.723206 kubelet[2603]: I0213 15:35:41.723105 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.723097144 podStartE2EDuration="1.723097144s" podCreationTimestamp="2025-02-13 15:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:41.723080354 +0000 UTC m=+1.144663279" watchObservedRunningTime="2025-02-13 15:35:41.723097144 +0000 UTC m=+1.144680069"
Feb 13 15:35:41.731066 kubelet[2603]: I0213 15:35:41.730983 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7309655259999999 podStartE2EDuration="1.730965526s" podCreationTimestamp="2025-02-13 15:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:41.73095765 +0000 UTC m=+1.152540575" watchObservedRunningTime="2025-02-13 15:35:41.730965526 +0000 UTC m=+1.152548451"
Feb 13 15:35:42.685397 kubelet[2603]: E0213 15:35:42.684096 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:43.366262 kubelet[2603]: E0213 15:35:43.366230 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:43.686190 kubelet[2603]: E0213 15:35:43.686083 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:43.956444 sudo[1619]: pam_unix(sudo:session): session closed for user root
Feb 13 15:35:43.957691 sshd[1618]: Connection closed by 10.0.0.1 port 46326
Feb 13 15:35:43.960672 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:43.963659 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:46326.service: Deactivated successfully.
Feb 13 15:35:43.965438 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:35:43.965680 systemd[1]: session-7.scope: Consumed 9.340s CPU time, 192.2M memory peak, 0B memory swap peak.
Feb 13 15:35:43.966721 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:35:43.967913 systemd-logind[1422]: Removed session 7.
Feb 13 15:35:47.795103 kubelet[2603]: E0213 15:35:47.795062 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:48.691959 kubelet[2603]: E0213 15:35:48.691902 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:52.923068 kubelet[2603]: E0213 15:35:52.923038 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:53.373061 kubelet[2603]: E0213 15:35:53.372963 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:54.133099 update_engine[1424]: I20250213 15:35:54.132877 1424 update_attempter.cc:509] Updating boot flags...
Feb 13 15:35:54.163956 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2690)
Feb 13 15:35:54.194946 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2689)
Feb 13 15:35:54.222950 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2689)
Feb 13 15:35:54.466556 kubelet[2603]: I0213 15:35:54.466447 2603 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:35:54.467312 containerd[1437]: time="2025-02-13T15:35:54.467042627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:35:54.467568 kubelet[2603]: I0213 15:35:54.467300 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:35:55.371961 kubelet[2603]: I0213 15:35:55.371307 2603 topology_manager.go:215] "Topology Admit Handler" podUID="84972f27-6c42-43aa-96e8-cb12e95494ed" podNamespace="kube-system" podName="kube-proxy-cmh8l"
Feb 13 15:35:55.378101 kubelet[2603]: I0213 15:35:55.377991 2603 topology_manager.go:215] "Topology Admit Handler" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" podNamespace="kube-system" podName="cilium-sf928"
Feb 13 15:35:55.382660 systemd[1]: Created slice kubepods-besteffort-pod84972f27_6c42_43aa_96e8_cb12e95494ed.slice - libcontainer container kubepods-besteffort-pod84972f27_6c42_43aa_96e8_cb12e95494ed.slice.
Feb 13 15:35:55.416538 systemd[1]: Created slice kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice - libcontainer container kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice.
Feb 13 15:35:55.540532 kubelet[2603]: I0213 15:35:55.540473 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-kernel\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540532 kubelet[2603]: I0213 15:35:55.540528 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-hubble-tls\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540549 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84972f27-6c42-43aa-96e8-cb12e95494ed-lib-modules\") pod \"kube-proxy-cmh8l\" (UID: \"84972f27-6c42-43aa-96e8-cb12e95494ed\") " pod="kube-system/kube-proxy-cmh8l"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540570 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-lib-modules\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540585 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-run\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540602 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-hostproc\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540616 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-etc-cni-netd\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.540975 kubelet[2603]: I0213 15:35:55.540634 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86f589ff-009e-4d4b-9d1b-322288d2106e-clustermesh-secrets\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541130 kubelet[2603]: I0213 15:35:55.540650 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxskz\" (UniqueName: \"kubernetes.io/projected/84972f27-6c42-43aa-96e8-cb12e95494ed-kube-api-access-dxskz\") pod \"kube-proxy-cmh8l\" (UID: \"84972f27-6c42-43aa-96e8-cb12e95494ed\") " pod="kube-system/kube-proxy-cmh8l"
Feb 13 15:35:55.541130 kubelet[2603]: I0213 15:35:55.540666 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmkz\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-kube-api-access-dqmkz\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541130 kubelet[2603]: I0213 15:35:55.540681 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-xtables-lock\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541130 kubelet[2603]: I0213 15:35:55.540696 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-net\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541130 kubelet[2603]: I0213 15:35:55.540715 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-cgroup\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541285 kubelet[2603]: I0213 15:35:55.540733 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-config-path\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541285 kubelet[2603]: I0213 15:35:55.540748 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84972f27-6c42-43aa-96e8-cb12e95494ed-xtables-lock\") pod \"kube-proxy-cmh8l\" (UID: \"84972f27-6c42-43aa-96e8-cb12e95494ed\") " pod="kube-system/kube-proxy-cmh8l"
Feb 13 15:35:55.541285 kubelet[2603]: I0213 15:35:55.540763 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-bpf-maps\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541285 kubelet[2603]: I0213 15:35:55.540779 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cni-path\") pod \"cilium-sf928\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " pod="kube-system/cilium-sf928"
Feb 13 15:35:55.541285 kubelet[2603]: I0213 15:35:55.540799 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84972f27-6c42-43aa-96e8-cb12e95494ed-kube-proxy\") pod \"kube-proxy-cmh8l\" (UID: \"84972f27-6c42-43aa-96e8-cb12e95494ed\") " pod="kube-system/kube-proxy-cmh8l"
Feb 13 15:35:55.625601 kubelet[2603]: I0213 15:35:55.625474 2603 topology_manager.go:215] "Topology Admit Handler" podUID="580dab04-9017-43f1-8ccb-fd5463c3bd0b" podNamespace="kube-system" podName="cilium-operator-599987898-74wx8"
Feb 13 15:35:55.633166 systemd[1]: Created slice kubepods-besteffort-pod580dab04_9017_43f1_8ccb_fd5463c3bd0b.slice - libcontainer container kubepods-besteffort-pod580dab04_9017_43f1_8ccb_fd5463c3bd0b.slice.
Feb 13 15:35:55.716015 kubelet[2603]: E0213 15:35:55.715972 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:55.718944 kubelet[2603]: E0213 15:35:55.718611 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:55.722209 containerd[1437]: time="2025-02-13T15:35:55.722163445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sf928,Uid:86f589ff-009e-4d4b-9d1b-322288d2106e,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:55.723206 containerd[1437]: time="2025-02-13T15:35:55.722986538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmh8l,Uid:84972f27-6c42-43aa-96e8-cb12e95494ed,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:55.742464 kubelet[2603]: I0213 15:35:55.742408 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580dab04-9017-43f1-8ccb-fd5463c3bd0b-cilium-config-path\") pod \"cilium-operator-599987898-74wx8\" (UID: \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\") " pod="kube-system/cilium-operator-599987898-74wx8"
Feb 13 15:35:55.742464 kubelet[2603]: I0213 15:35:55.742457 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdfg\" (UniqueName: \"kubernetes.io/projected/580dab04-9017-43f1-8ccb-fd5463c3bd0b-kube-api-access-cgdfg\") pod \"cilium-operator-599987898-74wx8\" (UID: \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\") " pod="kube-system/cilium-operator-599987898-74wx8"
Feb 13 15:35:55.763088 containerd[1437]: time="2025-02-13T15:35:55.761747156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:55.763088 containerd[1437]: time="2025-02-13T15:35:55.762700091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:55.763088 containerd[1437]: time="2025-02-13T15:35:55.762716052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.763088 containerd[1437]: time="2025-02-13T15:35:55.762819613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.771543 containerd[1437]: time="2025-02-13T15:35:55.771145506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:55.771839 containerd[1437]: time="2025-02-13T15:35:55.771521672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:55.771839 containerd[1437]: time="2025-02-13T15:35:55.771541512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.771839 containerd[1437]: time="2025-02-13T15:35:55.771674195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.782146 systemd[1]: Started cri-containerd-b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326.scope - libcontainer container b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326.
Feb 13 15:35:55.789046 systemd[1]: Started cri-containerd-3fe7b2bf4284fa683f52f6088380147dfbcf9372ad53002f1f312f97c1ba3355.scope - libcontainer container 3fe7b2bf4284fa683f52f6088380147dfbcf9372ad53002f1f312f97c1ba3355.
Feb 13 15:35:55.807908 containerd[1437]: time="2025-02-13T15:35:55.807850571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sf928,Uid:86f589ff-009e-4d4b-9d1b-322288d2106e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\""
Feb 13 15:35:55.812508 kubelet[2603]: E0213 15:35:55.812188 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:55.814775 containerd[1437]: time="2025-02-13T15:35:55.814698321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:35:55.823855 containerd[1437]: time="2025-02-13T15:35:55.823750305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmh8l,Uid:84972f27-6c42-43aa-96e8-cb12e95494ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fe7b2bf4284fa683f52f6088380147dfbcf9372ad53002f1f312f97c1ba3355\""
Feb 13 15:35:55.824780 kubelet[2603]: E0213 15:35:55.824757 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:55.827217 containerd[1437]: time="2025-02-13T15:35:55.827176200Z" level=info msg="CreateContainer within sandbox \"3fe7b2bf4284fa683f52f6088380147dfbcf9372ad53002f1f312f97c1ba3355\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:35:55.861183 containerd[1437]: time="2025-02-13T15:35:55.860894257Z" level=info msg="CreateContainer within sandbox \"3fe7b2bf4284fa683f52f6088380147dfbcf9372ad53002f1f312f97c1ba3355\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87357392c0111dda6a1626de1068d12a0bc847d04660fe28c15ce5bc89c17540\""
Feb 13 15:35:55.863662 containerd[1437]: time="2025-02-13T15:35:55.863602701Z" level=info msg="StartContainer for \"87357392c0111dda6a1626de1068d12a0bc847d04660fe28c15ce5bc89c17540\""
Feb 13 15:35:55.893130 systemd[1]: Started cri-containerd-87357392c0111dda6a1626de1068d12a0bc847d04660fe28c15ce5bc89c17540.scope - libcontainer container 87357392c0111dda6a1626de1068d12a0bc847d04660fe28c15ce5bc89c17540.
Feb 13 15:35:55.924676 containerd[1437]: time="2025-02-13T15:35:55.924617954Z" level=info msg="StartContainer for \"87357392c0111dda6a1626de1068d12a0bc847d04660fe28c15ce5bc89c17540\" returns successfully"
Feb 13 15:35:55.937765 kubelet[2603]: E0213 15:35:55.936508 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:55.937945 containerd[1437]: time="2025-02-13T15:35:55.937399478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-74wx8,Uid:580dab04-9017-43f1-8ccb-fd5463c3bd0b,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:55.965363 containerd[1437]: time="2025-02-13T15:35:55.965233681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:35:55.965363 containerd[1437]: time="2025-02-13T15:35:55.965292002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:35:55.965363 containerd[1437]: time="2025-02-13T15:35:55.965304283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.965549 containerd[1437]: time="2025-02-13T15:35:55.965437765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:35:55.991147 systemd[1]: Started cri-containerd-1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5.scope - libcontainer container 1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5.
Feb 13 15:35:56.031650 containerd[1437]: time="2025-02-13T15:35:56.031561316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-74wx8,Uid:580dab04-9017-43f1-8ccb-fd5463c3bd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\""
Feb 13 15:35:56.032433 kubelet[2603]: E0213 15:35:56.032402 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:56.706668 kubelet[2603]: E0213 15:35:56.705750 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:56.716040 kubelet[2603]: I0213 15:35:56.715605 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cmh8l" podStartSLOduration=1.715587136 podStartE2EDuration="1.715587136s" podCreationTimestamp="2025-02-13 15:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:56.715165129 +0000 UTC m=+16.136748094" watchObservedRunningTime="2025-02-13 15:35:56.715587136 +0000 UTC m=+16.137170021"
Feb 13 15:36:01.421996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411540254.mount: Deactivated successfully.
Feb 13 15:36:02.675641 containerd[1437]: time="2025-02-13T15:36:02.674831130Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:02.678078 containerd[1437]: time="2025-02-13T15:36:02.678040927Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:36:02.679484 containerd[1437]: time="2025-02-13T15:36:02.679431262Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:02.681249 containerd[1437]: time="2025-02-13T15:36:02.680973000Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.866231079s" Feb 13 15:36:02.681249 containerd[1437]: time="2025-02-13T15:36:02.681007081Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:36:02.683568 containerd[1437]: time="2025-02-13T15:36:02.683535989Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:36:02.684580 containerd[1437]: time="2025-02-13T15:36:02.684552281Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:36:02.703801 containerd[1437]: time="2025-02-13T15:36:02.703715621Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\"" Feb 13 15:36:02.704289 containerd[1437]: time="2025-02-13T15:36:02.704192266Z" level=info msg="StartContainer for \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\"" Feb 13 15:36:02.731089 systemd[1]: Started cri-containerd-00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982.scope - libcontainer container 00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982. Feb 13 15:36:02.757396 containerd[1437]: time="2025-02-13T15:36:02.757350795Z" level=info msg="StartContainer for \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\" returns successfully" Feb 13 15:36:02.791717 systemd[1]: cri-containerd-00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982.scope: Deactivated successfully. 
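[Annotation] A quick sanity check on the pull above: the stop-pulling event reports 157,646,710 bytes read, and the Pulled message a duration of 6.866231079 s, which works out to roughly 23 MB/s of effective throughput (simple arithmetic on the logged figures; note the "size" field in the Pulled message is a slightly different number, the image content size):

```python
# Effective throughput of the cilium image pull, from the figures logged above.
bytes_read = 157_646_710      # "bytes read" in the stop-pulling event
duration_s = 6.866231079      # "in 6.866231079s" from the Pulled message
print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")     # -> 23.0 MB/s
print(f"{bytes_read / duration_s / 2**20:.1f} MiB/s")  # -> 21.9 MiB/s
```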
Feb 13 15:36:02.953406 containerd[1437]: time="2025-02-13T15:36:02.953263158Z" level=info msg="shim disconnected" id=00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982 namespace=k8s.io Feb 13 15:36:02.953406 containerd[1437]: time="2025-02-13T15:36:02.953325439Z" level=warning msg="cleaning up after shim disconnected" id=00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982 namespace=k8s.io Feb 13 15:36:02.953406 containerd[1437]: time="2025-02-13T15:36:02.953334199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:03.696029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982-rootfs.mount: Deactivated successfully. Feb 13 15:36:03.727061 kubelet[2603]: E0213 15:36:03.727031 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:03.729090 containerd[1437]: time="2025-02-13T15:36:03.729056124Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:36:03.747477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969112634.mount: Deactivated successfully. Feb 13 15:36:03.764138 containerd[1437]: time="2025-02-13T15:36:03.764091268Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\"" Feb 13 15:36:03.764769 containerd[1437]: time="2025-02-13T15:36:03.764743795Z" level=info msg="StartContainer for \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\"" Feb 13 15:36:03.790080 systemd[1]: Started cri-containerd-5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405.scope - libcontainer container 5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405. Feb 13 15:36:03.812431 containerd[1437]: time="2025-02-13T15:36:03.812359357Z" level=info msg="StartContainer for \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\" returns successfully" Feb 13 15:36:03.827146 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:36:03.827359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:36:03.827428 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:36:03.833538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:36:03.833714 systemd[1]: cri-containerd-5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405.scope: Deactivated successfully. Feb 13 15:36:03.850983 containerd[1437]: time="2025-02-13T15:36:03.850905019Z" level=info msg="shim disconnected" id=5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405 namespace=k8s.io Feb 13 15:36:03.850983 containerd[1437]: time="2025-02-13T15:36:03.850979860Z" level=warning msg="cleaning up after shim disconnected" id=5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405 namespace=k8s.io Feb 13 15:36:03.850983 containerd[1437]: time="2025-02-13T15:36:03.850988660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:03.861865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:36:04.696211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405-rootfs.mount: Deactivated successfully. Feb 13 15:36:04.729588 kubelet[2603]: E0213 15:36:04.729551 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:04.732392 containerd[1437]: time="2025-02-13T15:36:04.732355263Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:36:04.754959 containerd[1437]: time="2025-02-13T15:36:04.754896740Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\"" Feb 13 15:36:04.755388 containerd[1437]: time="2025-02-13T15:36:04.755368105Z" level=info msg="StartContainer for \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\"" Feb 13 15:36:04.788156 systemd[1]: Started cri-containerd-a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e.scope - libcontainer container a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e. Feb 13 15:36:04.816385 containerd[1437]: time="2025-02-13T15:36:04.815965181Z" level=info msg="StartContainer for \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\" returns successfully" Feb 13 15:36:04.836419 systemd[1]: cri-containerd-a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e.scope: Deactivated successfully. Feb 13 15:36:04.855689 containerd[1437]: time="2025-02-13T15:36:04.855614437Z" level=info msg="shim disconnected" id=a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e namespace=k8s.io Feb 13 15:36:04.855689 containerd[1437]: time="2025-02-13T15:36:04.855663518Z" level=warning msg="cleaning up after shim disconnected" id=a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e namespace=k8s.io Feb 13 15:36:04.855689 containerd[1437]: time="2025-02-13T15:36:04.855671678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:05.696218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e-rootfs.mount: Deactivated successfully. 
Feb 13 15:36:05.733292 kubelet[2603]: E0213 15:36:05.733102 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:05.735503 containerd[1437]: time="2025-02-13T15:36:05.735458796Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:36:05.753377 containerd[1437]: time="2025-02-13T15:36:05.753324616Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\"" Feb 13 15:36:05.753790 containerd[1437]: time="2025-02-13T15:36:05.753765341Z" level=info msg="StartContainer for \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\"" Feb 13 15:36:05.778072 systemd[1]: Started cri-containerd-164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c.scope - libcontainer container 164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c. Feb 13 15:36:05.796264 systemd[1]: cri-containerd-164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c.scope: Deactivated successfully. Feb 13 15:36:05.800205 containerd[1437]: time="2025-02-13T15:36:05.799999886Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice/cri-containerd-164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c.scope/memory.events\": no such file or directory" Feb 13 15:36:05.800205 containerd[1437]: time="2025-02-13T15:36:05.800104927Z" level=info msg="StartContainer for \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\" returns successfully" Feb 13 15:36:05.828379 containerd[1437]: time="2025-02-13T15:36:05.828310211Z" level=info msg="shim disconnected" id=164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c namespace=k8s.io Feb 13 15:36:05.828379 containerd[1437]: time="2025-02-13T15:36:05.828363291Z" level=warning msg="cleaning up after shim disconnected" id=164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c namespace=k8s.io Feb 13 15:36:05.828379 containerd[1437]: time="2025-02-13T15:36:05.828372572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:06.696320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c-rootfs.mount: Deactivated successfully. 
Feb 13 15:36:06.737185 kubelet[2603]: E0213 15:36:06.737134 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:06.741865 containerd[1437]: time="2025-02-13T15:36:06.741816825Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:36:06.767031 containerd[1437]: time="2025-02-13T15:36:06.766879788Z" level=info msg="CreateContainer within sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\"" Feb 13 15:36:06.766957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626330318.mount: Deactivated successfully. Feb 13 15:36:06.767974 containerd[1437]: time="2025-02-13T15:36:06.767907477Z" level=info msg="StartContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\"" Feb 13 15:36:06.799105 systemd[1]: Started cri-containerd-90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393.scope - libcontainer container 90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393. Feb 13 15:36:06.847753 containerd[1437]: time="2025-02-13T15:36:06.847693088Z" level=info msg="StartContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" returns successfully" Feb 13 15:36:06.959439 kubelet[2603]: I0213 15:36:06.959335 2603 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:36:06.979997 kubelet[2603]: I0213 15:36:06.979950 2603 topology_manager.go:215] "Topology Admit Handler" podUID="f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8wl8d" Feb 13 15:36:06.983489 kubelet[2603]: I0213 15:36:06.983461 2603 topology_manager.go:215] "Topology Admit Handler" podUID="e1a54047-9324-4eac-9eaa-00e05486ce16" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zd62b" Feb 13 15:36:06.992129 systemd[1]: Created slice kubepods-burstable-podf2b9aeda_1b50_4bf0_bb5a_2aad0cab2a6b.slice - libcontainer container kubepods-burstable-podf2b9aeda_1b50_4bf0_bb5a_2aad0cab2a6b.slice. Feb 13 15:36:07.016384 systemd[1]: Created slice kubepods-burstable-pode1a54047_9324_4eac_9eaa_00e05486ce16.slice - libcontainer container kubepods-burstable-pode1a54047_9324_4eac_9eaa_00e05486ce16.slice. 
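[Annotation] The CreateContainer entries above spell out the cilium pod's init-container chain inside sandbox b2f84d98…: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, and finally the long-running cilium-agent. A small sketch that recovers this order from a journal dump like this one (the regex is keyed to the containerd message format shown above; pipe the log in on stdin):

```python
import re
import sys

# Recover container creation order from containerd "CreateContainer ... for
# container &ContainerMetadata{Name:...}" request entries in a journal dump.
# Note: this matches every sandbox in the dump (kube-proxy, coredns, ...);
# filter on the sandbox id if you want a single pod's chain.
req = re.compile(r'for container &ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt')

names = [m.group(1) for m in req.finditer(sys.stdin.read())]
print(" -> ".join(names))
# For the cilium-sf928 sandbox the chain reads:
#   mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs
#     -> clean-cilium-state -> cilium-agent
```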
Feb 13 15:36:07.093814 kubelet[2603]: I0213 15:36:07.093565 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcx2\" (UniqueName: \"kubernetes.io/projected/e1a54047-9324-4eac-9eaa-00e05486ce16-kube-api-access-5zcx2\") pod \"coredns-7db6d8ff4d-zd62b\" (UID: \"e1a54047-9324-4eac-9eaa-00e05486ce16\") " pod="kube-system/coredns-7db6d8ff4d-zd62b" Feb 13 15:36:07.093814 kubelet[2603]: I0213 15:36:07.093629 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b-config-volume\") pod \"coredns-7db6d8ff4d-8wl8d\" (UID: \"f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b\") " pod="kube-system/coredns-7db6d8ff4d-8wl8d" Feb 13 15:36:07.093814 kubelet[2603]: I0213 15:36:07.093653 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmnv\" (UniqueName: \"kubernetes.io/projected/f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b-kube-api-access-hhmnv\") pod \"coredns-7db6d8ff4d-8wl8d\" (UID: \"f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b\") " pod="kube-system/coredns-7db6d8ff4d-8wl8d" Feb 13 15:36:07.093814 kubelet[2603]: I0213 15:36:07.093677 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1a54047-9324-4eac-9eaa-00e05486ce16-config-volume\") pod \"coredns-7db6d8ff4d-zd62b\" (UID: \"e1a54047-9324-4eac-9eaa-00e05486ce16\") " pod="kube-system/coredns-7db6d8ff4d-zd62b" Feb 13 15:36:07.305524 kubelet[2603]: E0213 15:36:07.305471 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:07.306830 containerd[1437]: time="2025-02-13T15:36:07.306784887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wl8d,Uid:f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:07.318320 kubelet[2603]: E0213 15:36:07.318285 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:07.320000 containerd[1437]: time="2025-02-13T15:36:07.319906369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zd62b,Uid:e1a54047-9324-4eac-9eaa-00e05486ce16,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:07.742647 kubelet[2603]: E0213 15:36:07.742374 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:08.685521 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). Feb 13 15:36:08.730307 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:08.731355 sshd-session[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:08.735503 systemd-logind[1422]: New session 8 of user core. Feb 13 15:36:08.745217 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 15:36:08.746620 kubelet[2603]: E0213 15:36:08.746501 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:08.883229 sshd[3401]: Connection closed by 10.0.0.1 port 54834 Feb 13 15:36:08.883564 sshd-session[3399]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:08.887746 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:54834.service: Deactivated successfully. Feb 13 15:36:08.889772 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:36:08.890736 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:36:08.891832 systemd-logind[1422]: Removed session 8. Feb 13 15:36:09.478966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142578614.mount: Deactivated successfully. Feb 13 15:36:09.745913 kubelet[2603]: E0213 15:36:09.745796 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:10.033668 containerd[1437]: time="2025-02-13T15:36:10.033609792Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:10.034662 containerd[1437]: time="2025-02-13T15:36:10.034624801Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:36:10.035530 containerd[1437]: time="2025-02-13T15:36:10.035509528Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:10.037041 containerd[1437]: time="2025-02-13T15:36:10.036888140Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.353312549s" Feb 13 15:36:10.037041 containerd[1437]: time="2025-02-13T15:36:10.036945820Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:36:10.040088 containerd[1437]: time="2025-02-13T15:36:10.040043246Z" level=info msg="CreateContainer within sandbox \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:36:10.050808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37085500.mount: Deactivated successfully. 
Feb 13 15:36:10.051573 containerd[1437]: time="2025-02-13T15:36:10.051523501Z" level=info msg="CreateContainer within sandbox \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\"" Feb 13 15:36:10.052038 containerd[1437]: time="2025-02-13T15:36:10.052000345Z" level=info msg="StartContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\"" Feb 13 15:36:10.088088 systemd[1]: Started cri-containerd-77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53.scope - libcontainer container 77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53. Feb 13 15:36:10.108624 containerd[1437]: time="2025-02-13T15:36:10.108536493Z" level=info msg="StartContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" returns successfully" Feb 13 15:36:10.752272 kubelet[2603]: E0213 15:36:10.752004 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:10.767763 kubelet[2603]: I0213 15:36:10.767119 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sf928" podStartSLOduration=8.898025269 podStartE2EDuration="15.767102143s" podCreationTimestamp="2025-02-13 15:35:55 +0000 UTC" firstStartedPulling="2025-02-13 15:35:55.814176512 +0000 UTC m=+15.235759397" lastFinishedPulling="2025-02-13 15:36:02.683253386 +0000 UTC m=+22.104836271" observedRunningTime="2025-02-13 15:36:07.757279867 +0000 UTC m=+27.178862792" watchObservedRunningTime="2025-02-13 15:36:10.767102143 +0000 UTC m=+30.188685068" Feb 13 15:36:10.767763 kubelet[2603]: I0213 15:36:10.767391 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-74wx8" podStartSLOduration=1.7632287450000002 podStartE2EDuration="15.767383105s" podCreationTimestamp="2025-02-13 15:35:55 +0000 UTC" firstStartedPulling="2025-02-13 15:35:56.033568946 +0000 UTC m=+15.455151871" lastFinishedPulling="2025-02-13 15:36:10.037723306 +0000 UTC m=+29.459306231" observedRunningTime="2025-02-13 15:36:10.766178215 +0000 UTC m=+30.187761140" watchObservedRunningTime="2025-02-13 15:36:10.767383105 +0000 UTC m=+30.188966030" Feb 13 15:36:11.753224 kubelet[2603]: E0213 15:36:11.753181 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:13.894734 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:55020.service - OpenSSH per-connection server daemon (10.0.0.1:55020). Feb 13 15:36:13.942037 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 55020 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:13.943516 sshd-session[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:13.948161 systemd-logind[1422]: New session 9 of user core. Feb 13 15:36:13.962135 systemd[1]: Started session-9.scope - Session 9 of User core. 
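[Annotation] The startup-latency entries above fit a simple relation: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The relation is inferred from the logged numbers, which agree to the nanosecond for the cilium-sf928 entry; checking it with values copied from the log (Python keeps microseconds, so the nanosecond digits are truncated):

```python
from datetime import datetime

def t(s: str) -> datetime:
    # Parse the log's "+0000 UTC" timestamps, truncated to microseconds.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created  = datetime(2025, 2, 13, 15, 35, 55)       # podCreationTimestamp
running  = t("2025-02-13 15:36:10.767102143")      # watchObservedRunningTime
pull_beg = t("2025-02-13 15:35:55.814176512")      # firstStartedPulling
pull_end = t("2025-02-13 15:36:02.683253386")      # lastFinishedPulling

e2e = (running - created).total_seconds()            # ~15.767102 s
slo = e2e - (pull_end - pull_beg).total_seconds()    # ~8.898025 s
print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")
```

When no pull happens (firstStartedPulling is the zero time, as in the kube-proxy entry above), the two durations coincide.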
Feb 13 15:36:14.009724 systemd-networkd[1375]: cilium_host: Link UP Feb 13 15:36:14.011617 systemd-networkd[1375]: cilium_net: Link UP Feb 13 15:36:14.012148 systemd-networkd[1375]: cilium_net: Gained carrier Feb 13 15:36:14.012624 systemd-networkd[1375]: cilium_host: Gained carrier Feb 13 15:36:14.013609 systemd-networkd[1375]: cilium_net: Gained IPv6LL Feb 13 15:36:14.016103 systemd-networkd[1375]: cilium_host: Gained IPv6LL Feb 13 15:36:14.103897 sshd[3474]: Connection closed by 10.0.0.1 port 55020 Feb 13 15:36:14.104443 sshd-session[3472]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:14.111623 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:55020.service: Deactivated successfully. Feb 13 15:36:14.113271 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:36:14.116460 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:36:14.117060 systemd-networkd[1375]: cilium_vxlan: Link UP Feb 13 15:36:14.117066 systemd-networkd[1375]: cilium_vxlan: Gained carrier Feb 13 15:36:14.119155 systemd-logind[1422]: Removed session 9. Feb 13 15:36:14.438076 kernel: NET: Registered PF_ALG protocol family Feb 13 15:36:14.992089 systemd-networkd[1375]: lxc_health: Link UP Feb 13 15:36:15.004774 systemd-networkd[1375]: lxc_health: Gained carrier Feb 13 15:36:15.178221 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Feb 13 15:36:15.421471 systemd-networkd[1375]: lxc3aa90349b69a: Link UP Feb 13 15:36:15.433954 kernel: eth0: renamed from tmpedf3e Feb 13 15:36:15.442061 systemd-networkd[1375]: lxc31329b857318: Link UP Feb 13 15:36:15.443969 kernel: eth0: renamed from tmp0a53a Feb 13 15:36:15.449474 systemd-networkd[1375]: lxc31329b857318: Gained carrier Feb 13 15:36:15.449682 systemd-networkd[1375]: lxc3aa90349b69a: Gained carrier Feb 13 15:36:15.739491 kubelet[2603]: E0213 15:36:15.739368 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:15.762950 kubelet[2603]: E0213 15:36:15.762872 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:16.778050 systemd-networkd[1375]: lxc_health: Gained IPv6LL Feb 13 15:36:17.098115 systemd-networkd[1375]: lxc3aa90349b69a: Gained IPv6LL Feb 13 15:36:17.290117 systemd-networkd[1375]: lxc31329b857318: Gained IPv6LL Feb 13 15:36:19.016601 containerd[1437]: time="2025-02-13T15:36:19.016266699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:19.016601 containerd[1437]: time="2025-02-13T15:36:19.016329580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:19.016601 containerd[1437]: time="2025-02-13T15:36:19.016344500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:19.016601 containerd[1437]: time="2025-02-13T15:36:19.016440821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:19.018716 containerd[1437]: time="2025-02-13T15:36:19.018643794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:19.018829 containerd[1437]: time="2025-02-13T15:36:19.018700875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:19.018829 containerd[1437]: time="2025-02-13T15:36:19.018721915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:19.018914 containerd[1437]: time="2025-02-13T15:36:19.018815115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:19.039090 systemd[1]: Started cri-containerd-edf3e419780b99d86b7de894b700322bc723fa859d9f900642f60249d141ab75.scope - libcontainer container edf3e419780b99d86b7de894b700322bc723fa859d9f900642f60249d141ab75. Feb 13 15:36:19.043389 systemd[1]: Started cri-containerd-0a53a18b832b47f342376985fbc7a1d8a2ba84206b9ee33671768971975aeaea.scope - libcontainer container 0a53a18b832b47f342376985fbc7a1d8a2ba84206b9ee33671768971975aeaea. Feb 13 15:36:19.053673 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:36:19.055435 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:36:19.075956 containerd[1437]: time="2025-02-13T15:36:19.075771669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wl8d,Uid:f2b9aeda-1b50-4bf0-bb5a-2aad0cab2a6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a53a18b832b47f342376985fbc7a1d8a2ba84206b9ee33671768971975aeaea\"" Feb 13 15:36:19.076807 kubelet[2603]: E0213 15:36:19.076765 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:19.079542 containerd[1437]: time="2025-02-13T15:36:19.079445932Z" level=info msg="CreateContainer within sandbox \"0a53a18b832b47f342376985fbc7a1d8a2ba84206b9ee33671768971975aeaea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:36:19.079814 containerd[1437]: time="2025-02-13T15:36:19.079788094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zd62b,Uid:e1a54047-9324-4eac-9eaa-00e05486ce16,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf3e419780b99d86b7de894b700322bc723fa859d9f900642f60249d141ab75\"" Feb 13 15:36:19.080358 kubelet[2603]: E0213 15:36:19.080332 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:19.084172 containerd[1437]: time="2025-02-13T15:36:19.084098320Z" level=info msg="CreateContainer within sandbox \"edf3e419780b99d86b7de894b700322bc723fa859d9f900642f60249d141ab75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:36:19.104977 containerd[1437]: time="2025-02-13T15:36:19.104868729Z" level=info msg="CreateContainer within sandbox \"0a53a18b832b47f342376985fbc7a1d8a2ba84206b9ee33671768971975aeaea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80a569774d2f1c52925ac076326fdcbdd922ee8e02a7e52b60966c1c97706fff\""
Feb 13 15:36:19.105633 containerd[1437]: time="2025-02-13T15:36:19.105606734Z" level=info msg="StartContainer for \"80a569774d2f1c52925ac076326fdcbdd922ee8e02a7e52b60966c1c97706fff\"" Feb 13 15:36:19.108263 containerd[1437]: time="2025-02-13T15:36:19.108065189Z" level=info msg="CreateContainer within sandbox \"edf3e419780b99d86b7de894b700322bc723fa859d9f900642f60249d141ab75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d38574584a7bdf7fdfcb84facb085bff5c98a20669d8a9ac55254e3f11daf22\"" Feb 13 15:36:19.108691 containerd[1437]: time="2025-02-13T15:36:19.108489512Z" level=info msg="StartContainer for \"6d38574584a7bdf7fdfcb84facb085bff5c98a20669d8a9ac55254e3f11daf22\"" Feb 13 15:36:19.122384 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032). Feb 13 15:36:19.137758 systemd[1]: Started cri-containerd-80a569774d2f1c52925ac076326fdcbdd922ee8e02a7e52b60966c1c97706fff.scope - libcontainer container 80a569774d2f1c52925ac076326fdcbdd922ee8e02a7e52b60966c1c97706fff. Feb 13 15:36:19.148106 systemd[1]: Started cri-containerd-6d38574584a7bdf7fdfcb84facb085bff5c98a20669d8a9ac55254e3f11daf22.scope - libcontainer container 6d38574584a7bdf7fdfcb84facb085bff5c98a20669d8a9ac55254e3f11daf22. Feb 13 15:36:19.164901 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:19.166780 sshd-session[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:19.170343 containerd[1437]: time="2025-02-13T15:36:19.170277455Z" level=info msg="StartContainer for \"80a569774d2f1c52925ac076326fdcbdd922ee8e02a7e52b60966c1c97706fff\" returns successfully" Feb 13 15:36:19.176037 systemd-logind[1422]: New session 10 of user core. Feb 13 15:36:19.187081 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:36:19.192993 containerd[1437]: time="2025-02-13T15:36:19.192950396Z" level=info msg="StartContainer for \"6d38574584a7bdf7fdfcb84facb085bff5c98a20669d8a9ac55254e3f11daf22\" returns successfully" Feb 13 15:36:19.330866 sshd[4018]: Connection closed by 10.0.0.1 port 55032 Feb 13 15:36:19.331184 sshd-session[3956]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:19.335796 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:55032.service: Deactivated successfully. Feb 13 15:36:19.341386 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:36:19.342882 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:36:19.344034 systemd-logind[1422]: Removed session 10.
Feb 13 15:36:19.771793 kubelet[2603]: E0213 15:36:19.771748 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:19.777579 kubelet[2603]: E0213 15:36:19.777474 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:19.802218 kubelet[2603]: I0213 15:36:19.802056 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zd62b" podStartSLOduration=24.802042056 podStartE2EDuration="24.802042056s" podCreationTimestamp="2025-02-13 15:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:19.801790335 +0000 UTC m=+39.223373260" watchObservedRunningTime="2025-02-13 15:36:19.802042056 +0000 UTC m=+39.223624981" Feb 13 15:36:19.829846 kubelet[2603]: I0213 15:36:19.829776 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8wl8d" podStartSLOduration=24.829757948 podStartE2EDuration="24.829757948s" podCreationTimestamp="2025-02-13 15:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:19.816975429 +0000 UTC m=+39.238558394" watchObservedRunningTime="2025-02-13 15:36:19.829757948 +0000 UTC m=+39.251340873" Feb 13 15:36:20.022401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062373467.mount: Deactivated successfully. Feb 13 15:36:20.774718 kubelet[2603]: E0213 15:36:20.774482 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:20.774718 kubelet[2603]: E0213 15:36:20.774637 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:21.775840 kubelet[2603]: E0213 15:36:21.775802 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:21.775840 kubelet[2603]: E0213 15:36:21.775836 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:24.344788 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:59970.service - OpenSSH per-connection server daemon (10.0.0.1:59970). Feb 13 15:36:24.405039 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 59970 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:24.406364 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:24.410260 systemd-logind[1422]: New session 11 of user core. Feb 13 15:36:24.423114 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:36:24.541231 sshd[4059]: Connection closed by 10.0.0.1 port 59970 Feb 13 15:36:24.541580 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:24.544056 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:59970.service: Deactivated successfully. 
Feb 13 15:36:24.545599 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:36:24.546879 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:36:24.547948 systemd-logind[1422]: Removed session 11. Feb 13 15:36:29.553686 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:59982.service - OpenSSH per-connection server daemon (10.0.0.1:59982). Feb 13 15:36:29.597450 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 59982 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:29.598719 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:29.602742 systemd-logind[1422]: New session 12 of user core. Feb 13 15:36:29.612101 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:36:29.730690 sshd[4076]: Connection closed by 10.0.0.1 port 59982 Feb 13 15:36:29.731579 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:29.743560 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:59982.service: Deactivated successfully. Feb 13 15:36:29.745456 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:36:29.748467 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:36:29.754224 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:59992.service - OpenSSH per-connection server daemon (10.0.0.1:59992). Feb 13 15:36:29.755139 systemd-logind[1422]: Removed session 12. Feb 13 15:36:29.797138 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 59992 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:29.798777 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:29.802760 systemd-logind[1422]: New session 13 of user core. Feb 13 15:36:29.815107 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:36:29.983426 sshd[4092]: Connection closed by 10.0.0.1 port 59992 Feb 13 15:36:29.984249 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:29.996620 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:59992.service: Deactivated successfully. Feb 13 15:36:30.001232 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:36:30.007127 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:36:30.023360 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Feb 13 15:36:30.024482 systemd-logind[1422]: Removed session 13. Feb 13 15:36:30.062912 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:30.064349 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:30.068570 systemd-logind[1422]: New session 14 of user core. Feb 13 15:36:30.081145 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:36:30.200980 sshd[4104]: Connection closed by 10.0.0.1 port 60006 Feb 13 15:36:30.201585 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:30.205138 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:60006.service: Deactivated successfully. Feb 13 15:36:30.207206 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:36:30.207958 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:36:30.209006 systemd-logind[1422]: Removed session 14. 
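[Annotation] The logind entries follow a fixed pattern ("New session N of user core" paired with "Removed session N."), so session lifetimes can be read straight out of a dump like this one; sessions 11 through 14 above each last well under a second. A small sketch (pipe the log in on stdin; the year is assumed, since syslog-style timestamps omit it):

```python
import re
import sys
from datetime import datetime

# Pair "New session N" / "Removed session N" logind entries and print lifetimes.
TS = r"(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)"
new_re = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user")
rem_re = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse(ts: str) -> datetime:
    return datetime.strptime("2025 " + ts, "%Y %b %d %H:%M:%S.%f")  # assumed year

text = sys.stdin.read()
opened = {m.group(2): parse(m.group(1)) for m in new_re.finditer(text)}
for m in rem_re.finditer(text):
    if m.group(2) in opened:
        life = (parse(m.group(1)) - opened[m.group(2)]).total_seconds()
        print(f"session {m.group(2)}: {life:.3f}s")
```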
Feb 13 15:36:35.220608 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:57324.service - OpenSSH per-connection server daemon (10.0.0.1:57324). Feb 13 15:36:35.274162 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 57324 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:35.275454 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:35.279569 systemd-logind[1422]: New session 15 of user core. Feb 13 15:36:35.288130 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:36:35.408692 sshd[4118]: Connection closed by 10.0.0.1 port 57324 Feb 13 15:36:35.409418 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:35.412529 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:57324.service: Deactivated successfully. Feb 13 15:36:35.415294 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:36:35.415483 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:36:35.416619 systemd-logind[1422]: Removed session 15. Feb 13 15:36:40.420286 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:57340.service - OpenSSH per-connection server daemon (10.0.0.1:57340). Feb 13 15:36:40.460193 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 57340 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:40.462030 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:40.465950 systemd-logind[1422]: New session 16 of user core. Feb 13 15:36:40.476158 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:36:40.601804 sshd[4133]: Connection closed by 10.0.0.1 port 57340 Feb 13 15:36:40.602187 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:40.612603 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:57340.service: Deactivated successfully. Feb 13 15:36:40.614337 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:36:40.615637 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:36:40.617811 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:57352.service - OpenSSH per-connection server daemon (10.0.0.1:57352). Feb 13 15:36:40.618817 systemd-logind[1422]: Removed session 16. Feb 13 15:36:40.658795 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 57352 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:40.660222 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:40.664205 systemd-logind[1422]: New session 17 of user core. Feb 13 15:36:40.671092 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:36:40.890058 sshd[4148]: Connection closed by 10.0.0.1 port 57352 Feb 13 15:36:40.891834 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:40.901315 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:57352.service: Deactivated successfully. Feb 13 15:36:40.903411 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:36:40.904648 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:36:40.906023 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:57358.service - OpenSSH per-connection server daemon (10.0.0.1:57358). Feb 13 15:36:40.906854 systemd-logind[1422]: Removed session 17. 
Feb 13 15:36:40.954819 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 57358 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:40.956119 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:40.960324 systemd-logind[1422]: New session 18 of user core. Feb 13 15:36:40.969100 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:36:42.246011 sshd[4162]: Connection closed by 10.0.0.1 port 57358 Feb 13 15:36:42.246448 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:42.257012 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:57358.service: Deactivated successfully. Feb 13 15:36:42.261863 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:36:42.264185 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:36:42.272565 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:57372.service - OpenSSH per-connection server daemon (10.0.0.1:57372). Feb 13 15:36:42.274236 systemd-logind[1422]: Removed session 18. Feb 13 15:36:42.309274 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 57372 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:42.309911 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:42.314438 systemd-logind[1422]: New session 19 of user core. Feb 13 15:36:42.321082 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:36:42.542613 sshd[4182]: Connection closed by 10.0.0.1 port 57372 Feb 13 15:36:42.543033 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:42.559586 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:57372.service: Deactivated successfully. Feb 13 15:36:42.563299 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:36:42.564550 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:36:42.573412 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:60438.service - OpenSSH per-connection server daemon (10.0.0.1:60438). Feb 13 15:36:42.574518 systemd-logind[1422]: Removed session 19. Feb 13 15:36:42.612193 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 60438 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:42.613536 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:42.617545 systemd-logind[1422]: New session 20 of user core. Feb 13 15:36:42.639140 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:36:42.746891 sshd[4196]: Connection closed by 10.0.0.1 port 60438 Feb 13 15:36:42.747271 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:42.750490 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:60438.service: Deactivated successfully. Feb 13 15:36:42.753751 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:36:42.754574 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:36:42.755428 systemd-logind[1422]: Removed session 20. Feb 13 15:36:47.757371 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:60454.service - OpenSSH per-connection server daemon (10.0.0.1:60454). 
Feb 13 15:36:47.794885 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 60454 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:47.796054 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:47.799579 systemd-logind[1422]: New session 21 of user core. Feb 13 15:36:47.810098 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:36:47.916730 sshd[4213]: Connection closed by 10.0.0.1 port 60454 Feb 13 15:36:47.917085 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:47.920593 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:60454.service: Deactivated successfully. Feb 13 15:36:47.923101 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:36:47.923782 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:36:47.924566 systemd-logind[1422]: Removed session 21. Feb 13 15:36:52.927666 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:57204.service - OpenSSH per-connection server daemon (10.0.0.1:57204). Feb 13 15:36:52.970998 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 57204 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:52.972199 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:52.977576 systemd-logind[1422]: New session 22 of user core. Feb 13 15:36:52.984081 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:36:53.102732 sshd[4227]: Connection closed by 10.0.0.1 port 57204 Feb 13 15:36:53.103153 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:53.106318 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:57204.service: Deactivated successfully. Feb 13 15:36:53.108469 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:36:53.110515 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:36:53.113348 systemd-logind[1422]: Removed session 22. Feb 13 15:36:56.675282 kubelet[2603]: E0213 15:36:56.675163 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.675435 kubelet[2603]: E0213 15:36:57.675389 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:58.120142 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:57206.service - OpenSSH per-connection server daemon (10.0.0.1:57206). Feb 13 15:36:58.158949 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 57206 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:36:58.160339 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:58.165102 systemd-logind[1422]: New session 23 of user core. Feb 13 15:36:58.175085 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:36:58.282399 sshd[4244]: Connection closed by 10.0.0.1 port 57206 Feb 13 15:36:58.282749 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:58.285986 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:57206.service: Deactivated successfully. Feb 13 15:36:58.289334 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:36:58.290058 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit. 
Feb 13 15:36:58.290813 systemd-logind[1422]: Removed session 23. Feb 13 15:36:59.675100 kubelet[2603]: E0213 15:36:59.675069 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:03.295779 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Feb 13 15:37:03.339825 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:37:03.341113 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:03.345144 systemd-logind[1422]: New session 24 of user core. Feb 13 15:37:03.355159 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:37:03.461425 sshd[4258]: Connection closed by 10.0.0.1 port 45434 Feb 13 15:37:03.463083 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:03.472395 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:45434.service: Deactivated successfully. Feb 13 15:37:03.475328 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:37:03.476700 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:37:03.481228 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:45444.service - OpenSSH per-connection server daemon (10.0.0.1:45444). Feb 13 15:37:03.482107 systemd-logind[1422]: Removed session 24. Feb 13 15:37:03.516582 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 45444 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:37:03.517490 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:03.521738 systemd-logind[1422]: New session 25 of user core. Feb 13 15:37:03.534087 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:37:05.015050 containerd[1437]: time="2025-02-13T15:37:05.015008786Z" level=info msg="StopContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" with timeout 30 (s)" Feb 13 15:37:05.016481 containerd[1437]: time="2025-02-13T15:37:05.016454888Z" level=info msg="Stop container \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" with signal terminated" Feb 13 15:37:05.026532 systemd[1]: cri-containerd-77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53.scope: Deactivated successfully. Feb 13 15:37:05.050516 containerd[1437]: time="2025-02-13T15:37:05.050447323Z" level=info msg="StopContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" with timeout 2 (s)" Feb 13 15:37:05.052276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:05.052736 containerd[1437]: time="2025-02-13T15:37:05.052356712Z" level=info msg="Stop container \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" with signal terminated" Feb 13 15:37:05.054300 containerd[1437]: time="2025-02-13T15:37:05.054121419Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:37:05.060086 systemd-networkd[1375]: lxc_health: Link DOWN Feb 13 15:37:05.060092 systemd-networkd[1375]: lxc_health: Lost carrier Feb 13 15:37:05.062724 containerd[1437]: time="2025-02-13T15:37:05.062673029Z" level=info msg="shim disconnected" id=77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53 namespace=k8s.io Feb 13 15:37:05.062724 containerd[1437]: time="2025-02-13T15:37:05.062724789Z" level=warning msg="cleaning up after shim disconnected" id=77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53 namespace=k8s.io Feb 13 15:37:05.062876 containerd[1437]: time="2025-02-13T15:37:05.062733669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:05.087140 systemd[1]: cri-containerd-90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393.scope: Deactivated successfully. Feb 13 15:37:05.087406 systemd[1]: cri-containerd-90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393.scope: Consumed 6.507s CPU time. Feb 13 15:37:05.108782 containerd[1437]: time="2025-02-13T15:37:05.108730206Z" level=info msg="StopContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" returns successfully" Feb 13 15:37:05.109746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393-rootfs.mount: Deactivated successfully. Feb 13 15:37:05.111600 containerd[1437]: time="2025-02-13T15:37:05.111553809Z" level=info msg="StopPodSandbox for \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\"" Feb 13 15:37:05.113022 containerd[1437]: time="2025-02-13T15:37:05.112974391Z" level=info msg="shim disconnected" id=90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393 namespace=k8s.io Feb 13 15:37:05.113022 containerd[1437]: time="2025-02-13T15:37:05.113020471Z" level=warning msg="cleaning up after shim disconnected" id=90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393 namespace=k8s.io Feb 13 15:37:05.113115 containerd[1437]: time="2025-02-13T15:37:05.113030271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:05.115639 containerd[1437]: time="2025-02-13T15:37:05.115596590Z" level=info msg="Container to stop \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.127316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5-shm.mount: Deactivated successfully. 
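[Annotation] The StopContainer requests above carry explicit grace periods (timeout 30 s for the operator, 2 s for the agent) and are followed by "Stop container ... with signal terminated": CRI's graceful-stop contract is SIGTERM first, wait out the grace period, then escalate to SIGKILL if the process is still alive. A minimal sketch of that pattern (illustrative only, not containerd's implementation):

```python
import signal
import subprocess

def graceful_stop(proc: subprocess.Popen, timeout_s: float) -> None:
    """SIGTERM, then SIGKILL after the grace period: the CRI stop contract."""
    proc.send_signal(signal.SIGTERM)   # "Stop container ... with signal terminated"
    try:
        proc.wait(timeout=timeout_s)   # grace period from the StopContainer request
    except subprocess.TimeoutExpired:
        proc.kill()                    # escalate once the timeout lapses
        proc.wait()
```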
Feb 13 15:37:05.128292 containerd[1437]: time="2025-02-13T15:37:05.128257942Z" level=info msg="StopContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" returns successfully" Feb 13 15:37:05.129104 containerd[1437]: time="2025-02-13T15:37:05.129068514Z" level=info msg="StopPodSandbox for \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\"" Feb 13 15:37:05.129364 containerd[1437]: time="2025-02-13T15:37:05.129216997Z" level=info msg="Container to stop \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.129364 containerd[1437]: time="2025-02-13T15:37:05.129238197Z" level=info msg="Container to stop \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.129364 containerd[1437]: time="2025-02-13T15:37:05.129247877Z" level=info msg="Container to stop \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.129364 containerd[1437]: time="2025-02-13T15:37:05.129255837Z" level=info msg="Container to stop \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.129364 containerd[1437]: time="2025-02-13T15:37:05.129264397Z" level=info msg="Container to stop \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:05.131002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326-shm.mount: Deactivated successfully. Feb 13 15:37:05.134002 systemd[1]: cri-containerd-1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5.scope: Deactivated successfully. Feb 13 15:37:05.145363 systemd[1]: cri-containerd-b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326.scope: Deactivated successfully. 
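[The "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines are informational: StopPodSandbox asks the runtime to stop every container in the sandbox, and containers that already exited are simply skipped. A hypothetical manual equivalent with crictl, reusing the sandbox ID from the log:

    crictl pods | grep cilium    # locate the pod sandbox ID
    crictl stopp b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326    # stop the sandbox and any still-running containers
    crictl rmp b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326     # remove the stopped sandbox
]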
Feb 13 15:37:05.172911 containerd[1437]: time="2025-02-13T15:37:05.172686575Z" level=info msg="shim disconnected" id=1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5 namespace=k8s.io Feb 13 15:37:05.173086 containerd[1437]: time="2025-02-13T15:37:05.172911539Z" level=warning msg="cleaning up after shim disconnected" id=1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5 namespace=k8s.io Feb 13 15:37:05.173086 containerd[1437]: time="2025-02-13T15:37:05.172989620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:05.173554 containerd[1437]: time="2025-02-13T15:37:05.173497547Z" level=info msg="shim disconnected" id=b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326 namespace=k8s.io Feb 13 15:37:05.174285 containerd[1437]: time="2025-02-13T15:37:05.173555028Z" level=warning msg="cleaning up after shim disconnected" id=b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326 namespace=k8s.io Feb 13 15:37:05.174353 containerd[1437]: time="2025-02-13T15:37:05.174287519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:05.187472 containerd[1437]: time="2025-02-13T15:37:05.187418078Z" level=info msg="TearDown network for sandbox \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" successfully" Feb 13 15:37:05.187472 containerd[1437]: time="2025-02-13T15:37:05.187459839Z" level=info msg="StopPodSandbox for \"b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326\" returns successfully" Feb 13 15:37:05.189959 containerd[1437]: time="2025-02-13T15:37:05.189744674Z" level=info msg="TearDown network for sandbox \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\" successfully" Feb 13 15:37:05.189959 containerd[1437]: time="2025-02-13T15:37:05.189773674Z" level=info msg="StopPodSandbox for \"1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5\" returns successfully" Feb 13 15:37:05.329250 kubelet[2603]: I0213 15:37:05.329125 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-xtables-lock\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329250 kubelet[2603]: I0213 15:37:05.329205 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqmkz\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-kube-api-access-dqmkz\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329282 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-hubble-tls\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329306 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-kernel\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329323 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-lib-modules\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329338 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-hostproc\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329391 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-etc-cni-netd\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329639 kubelet[2603]: I0213 15:37:05.329414 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86f589ff-009e-4d4b-9d1b-322288d2106e-clustermesh-secrets\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329447 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cni-path\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329466 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-run\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329480 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-net\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329498 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580dab04-9017-43f1-8ccb-fd5463c3bd0b-cilium-config-path\") pod \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\" (UID: \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329518 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-config-path\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.329863 kubelet[2603]: I0213 15:37:05.329535 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgdfg\" (UniqueName: \"kubernetes.io/projected/580dab04-9017-43f1-8ccb-fd5463c3bd0b-kube-api-access-cgdfg\") pod \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\" (UID: \"580dab04-9017-43f1-8ccb-fd5463c3bd0b\") " Feb 13 15:37:05.330009 kubelet[2603]: I0213 15:37:05.329549 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-bpf-maps\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.330009 kubelet[2603]: I0213 15:37:05.329567 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-cgroup\") pod \"86f589ff-009e-4d4b-9d1b-322288d2106e\" (UID: \"86f589ff-009e-4d4b-9d1b-322288d2106e\") " Feb 13 15:37:05.335272 kubelet[2603]: I0213 15:37:05.335220 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.335392 kubelet[2603]: I0213 15:37:05.335312 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cni-path" (OuterVolumeSpecName: "cni-path") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.335392 kubelet[2603]: I0213 15:37:05.335332 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.335392 kubelet[2603]: I0213 15:37:05.335349 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.335392 kubelet[2603]: I0213 15:37:05.335363 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-hostproc" (OuterVolumeSpecName: "hostproc") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.335392 kubelet[2603]: I0213 15:37:05.335377 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.336381 kubelet[2603]: I0213 15:37:05.336348 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.336444 kubelet[2603]: I0213 15:37:05.336412 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.336444 kubelet[2603]: I0213 15:37:05.336431 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.348529 kubelet[2603]: I0213 15:37:05.342177 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580dab04-9017-43f1-8ccb-fd5463c3bd0b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "580dab04-9017-43f1-8ccb-fd5463c3bd0b" (UID: "580dab04-9017-43f1-8ccb-fd5463c3bd0b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:37:05.348529 kubelet[2603]: I0213 15:37:05.344155 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:37:05.349010 kubelet[2603]: I0213 15:37:05.348735 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:37:05.350794 kubelet[2603]: I0213 15:37:05.350763 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:37:05.351248 kubelet[2603]: I0213 15:37:05.351223 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580dab04-9017-43f1-8ccb-fd5463c3bd0b-kube-api-access-cgdfg" (OuterVolumeSpecName: "kube-api-access-cgdfg") pod "580dab04-9017-43f1-8ccb-fd5463c3bd0b" (UID: "580dab04-9017-43f1-8ccb-fd5463c3bd0b"). InnerVolumeSpecName "kube-api-access-cgdfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:37:05.351426 kubelet[2603]: I0213 15:37:05.351331 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-kube-api-access-dqmkz" (OuterVolumeSpecName: "kube-api-access-dqmkz") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "kube-api-access-dqmkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:37:05.351518 kubelet[2603]: I0213 15:37:05.351488 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f589ff-009e-4d4b-9d1b-322288d2106e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86f589ff-009e-4d4b-9d1b-322288d2106e" (UID: "86f589ff-009e-4d4b-9d1b-322288d2106e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:37:05.429795 kubelet[2603]: I0213 15:37:05.429756 2603 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429795 kubelet[2603]: I0213 15:37:05.429791 2603 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429805 2603 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429823 2603 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429832 2603 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429840 2603 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86f589ff-009e-4d4b-9d1b-322288d2106e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429847 2603 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429878 2603 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429886 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580dab04-9017-43f1-8ccb-fd5463c3bd0b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.429982 kubelet[2603]: I0213 15:37:05.429894 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429901 2603 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cgdfg\" (UniqueName: \"kubernetes.io/projected/580dab04-9017-43f1-8ccb-fd5463c3bd0b-kube-api-access-cgdfg\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429909 2603 
reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429938 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429947 2603 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429954 2603 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f589ff-009e-4d4b-9d1b-322288d2106e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.430138 kubelet[2603]: I0213 15:37:05.429963 2603 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dqmkz\" (UniqueName: \"kubernetes.io/projected/86f589ff-009e-4d4b-9d1b-322288d2106e-kube-api-access-dqmkz\") on node \"localhost\" DevicePath \"\"" Feb 13 15:37:05.722080 kubelet[2603]: E0213 15:37:05.721950 2603 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:37:05.871787 kubelet[2603]: I0213 15:37:05.871753 2603 scope.go:117] "RemoveContainer" containerID="90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393" Feb 13 15:37:05.874594 containerd[1437]: time="2025-02-13T15:37:05.874560648Z" level=info msg="RemoveContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\"" Feb 13 15:37:05.877456 systemd[1]: Removed slice kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice - libcontainer container kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice. Feb 13 15:37:05.877672 systemd[1]: kubepods-burstable-pod86f589ff_009e_4d4b_9d1b_322288d2106e.slice: Consumed 6.638s CPU time. Feb 13 15:37:05.882597 systemd[1]: Removed slice kubepods-besteffort-pod580dab04_9017_43f1_8ccb_fd5463c3bd0b.slice - libcontainer container kubepods-besteffort-pod580dab04_9017_43f1_8ccb_fd5463c3bd0b.slice. 
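[The kubelet.go:2900 error is the node-level consequence of the CNI config removal seen at 15:37:05.054: with nothing left in /etc/cni/net.d the runtime reports NetworkReady=false, and the node leaves Ready until the replacement agent initializes (the transition is logged at 15:37:12 below). A hypothetical way to observe the same condition from the API side:

    kubectl get node localhost -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
    # container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady ...
]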
Feb 13 15:37:05.884409 containerd[1437]: time="2025-02-13T15:37:05.884380357Z" level=info msg="RemoveContainer for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" returns successfully" Feb 13 15:37:05.886123 kubelet[2603]: I0213 15:37:05.886094 2603 scope.go:117] "RemoveContainer" containerID="164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c" Feb 13 15:37:05.887177 containerd[1437]: time="2025-02-13T15:37:05.887149799Z" level=info msg="RemoveContainer for \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\"" Feb 13 15:37:05.890060 containerd[1437]: time="2025-02-13T15:37:05.889989442Z" level=info msg="RemoveContainer for \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\" returns successfully" Feb 13 15:37:05.890486 kubelet[2603]: I0213 15:37:05.890130 2603 scope.go:117] "RemoveContainer" containerID="a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e" Feb 13 15:37:05.891463 containerd[1437]: time="2025-02-13T15:37:05.891424224Z" level=info msg="RemoveContainer for \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\"" Feb 13 15:37:05.894806 containerd[1437]: time="2025-02-13T15:37:05.894772914Z" level=info msg="RemoveContainer for \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\" returns successfully" Feb 13 15:37:05.894961 kubelet[2603]: I0213 15:37:05.894927 2603 scope.go:117] "RemoveContainer" containerID="5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405" Feb 13 15:37:05.896009 containerd[1437]: time="2025-02-13T15:37:05.895988613Z" level=info msg="RemoveContainer for \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\"" Feb 13 15:37:05.898378 containerd[1437]: time="2025-02-13T15:37:05.898338688Z" level=info msg="RemoveContainer for \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\" returns successfully" Feb 13 15:37:05.898591 kubelet[2603]: I0213 15:37:05.898499 2603 scope.go:117] "RemoveContainer" containerID="00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982" Feb 13 15:37:05.899716 containerd[1437]: time="2025-02-13T15:37:05.899687149Z" level=info msg="RemoveContainer for \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\"" Feb 13 15:37:05.902276 containerd[1437]: time="2025-02-13T15:37:05.902231547Z" level=info msg="RemoveContainer for \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\" returns successfully" Feb 13 15:37:05.902463 kubelet[2603]: I0213 15:37:05.902422 2603 scope.go:117] "RemoveContainer" containerID="90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393" Feb 13 15:37:05.902659 containerd[1437]: time="2025-02-13T15:37:05.902621833Z" level=error msg="ContainerStatus for \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\": not found" Feb 13 15:37:05.903988 kubelet[2603]: E0213 15:37:05.903962 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\": not found" containerID="90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393" Feb 13 15:37:05.904080 kubelet[2603]: I0213 15:37:05.903998 2603 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393"} err="failed to get container status \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\": rpc error: code = NotFound desc = an error occurred when try to find container \"90d730b45d3147b220b4c2d28e60794a35605fb6201bd02b4c79c5a0b9e6c393\": not found" Feb 13 15:37:05.904108 kubelet[2603]: I0213 15:37:05.904083 2603 scope.go:117] "RemoveContainer" containerID="164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c" Feb 13 15:37:05.904290 containerd[1437]: time="2025-02-13T15:37:05.904259818Z" level=error msg="ContainerStatus for \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\": not found" Feb 13 15:37:05.904418 kubelet[2603]: E0213 15:37:05.904394 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\": not found" containerID="164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c" Feb 13 15:37:05.904459 kubelet[2603]: I0213 15:37:05.904425 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c"} err="failed to get container status \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"164523ec3ffa7e9ec2b6ef3c4a26df0b7a92032841175d79b5098efa66bdfe2c\": not found" Feb 13 15:37:05.904459 kubelet[2603]: I0213 15:37:05.904450 2603 scope.go:117] "RemoveContainer" containerID="a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e" Feb 13 15:37:05.904650 containerd[1437]: time="2025-02-13T15:37:05.904622224Z" level=error msg="ContainerStatus for \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\": not found" Feb 13 15:37:05.904741 kubelet[2603]: E0213 15:37:05.904723 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\": not found" containerID="a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e" Feb 13 15:37:05.904773 kubelet[2603]: I0213 15:37:05.904745 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e"} err="failed to get container status \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1f695cdd8d088d2e726531cf7c4c89043cda482e4fffa4457782d7b8a50be5e\": not found" Feb 13 15:37:05.904773 kubelet[2603]: I0213 15:37:05.904759 2603 scope.go:117] "RemoveContainer" containerID="5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405" Feb 13 15:37:05.905006 containerd[1437]: time="2025-02-13T15:37:05.904971629Z" level=error msg="ContainerStatus for 
\"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\": not found" Feb 13 15:37:05.905085 kubelet[2603]: E0213 15:37:05.905067 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\": not found" containerID="5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405" Feb 13 15:37:05.905112 kubelet[2603]: I0213 15:37:05.905088 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405"} err="failed to get container status \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d16f6920c2bda0d9375c5cd56afc093eecf71d3cd1c067d5def135f829df405\": not found" Feb 13 15:37:05.905112 kubelet[2603]: I0213 15:37:05.905101 2603 scope.go:117] "RemoveContainer" containerID="00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982" Feb 13 15:37:05.905244 containerd[1437]: time="2025-02-13T15:37:05.905219913Z" level=error msg="ContainerStatus for \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\": not found" Feb 13 15:37:05.905322 kubelet[2603]: E0213 15:37:05.905306 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\": not found" containerID="00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982" Feb 13 15:37:05.905349 kubelet[2603]: I0213 15:37:05.905325 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982"} err="failed to get container status \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\": rpc error: code = NotFound desc = an error occurred when try to find container \"00d58fcf70f1da85ad7d92b325d934155db977d13b7ed0e559b0aa161e377982\": not found" Feb 13 15:37:05.905375 kubelet[2603]: I0213 15:37:05.905362 2603 scope.go:117] "RemoveContainer" containerID="77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53" Feb 13 15:37:05.906331 containerd[1437]: time="2025-02-13T15:37:05.906311209Z" level=info msg="RemoveContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\"" Feb 13 15:37:05.908619 containerd[1437]: time="2025-02-13T15:37:05.908582644Z" level=info msg="RemoveContainer for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" returns successfully" Feb 13 15:37:05.908828 kubelet[2603]: I0213 15:37:05.908805 2603 scope.go:117] "RemoveContainer" containerID="77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53" Feb 13 15:37:05.909007 containerd[1437]: time="2025-02-13T15:37:05.908981210Z" level=error msg="ContainerStatus for \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\": not found" Feb 13 15:37:05.909129 kubelet[2603]: E0213 15:37:05.909104 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\": not found" containerID="77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53" Feb 13 15:37:05.909167 kubelet[2603]: I0213 15:37:05.909130 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53"} err="failed to get container status \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\": rpc error: code = NotFound desc = an error occurred when try to find container \"77e93b2d20209f133fd2e8980eda966fca8ff0a3b434035370b8cef8662dcb53\": not found" Feb 13 15:37:06.032388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1686a3f22cd5f737449c1a86e4c4e655713c54118be6ebd756e16b8cb5a1dda5-rootfs.mount: Deactivated successfully. Feb 13 15:37:06.032490 systemd[1]: var-lib-kubelet-pods-580dab04\x2d9017\x2d43f1\x2d8ccb\x2dfd5463c3bd0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgdfg.mount: Deactivated successfully. Feb 13 15:37:06.032545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f84d98419a77ff259fdf4fcc203123ee33c9f8443f494ffbe3a3bdfafc2326-rootfs.mount: Deactivated successfully. Feb 13 15:37:06.032595 systemd[1]: var-lib-kubelet-pods-86f589ff\x2d009e\x2d4d4b\x2d9d1b\x2d322288d2106e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddqmkz.mount: Deactivated successfully. Feb 13 15:37:06.032647 systemd[1]: var-lib-kubelet-pods-86f589ff\x2d009e\x2d4d4b\x2d9d1b\x2d322288d2106e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:37:06.032692 systemd[1]: var-lib-kubelet-pods-86f589ff\x2d009e\x2d4d4b\x2d9d1b\x2d322288d2106e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:37:06.677582 kubelet[2603]: I0213 15:37:06.677541 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580dab04-9017-43f1-8ccb-fd5463c3bd0b" path="/var/lib/kubelet/pods/580dab04-9017-43f1-8ccb-fd5463c3bd0b/volumes" Feb 13 15:37:06.677963 kubelet[2603]: I0213 15:37:06.677946 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" path="/var/lib/kubelet/pods/86f589ff-009e-4d4b-9d1b-322288d2106e/volumes" Feb 13 15:37:06.978932 sshd[4273]: Connection closed by 10.0.0.1 port 45444 Feb 13 15:37:06.979450 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:06.992515 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:45444.service: Deactivated successfully. Feb 13 15:37:06.994107 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:37:06.995452 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:37:06.996778 systemd[1]: Started sshd@25-10.0.0.102:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460). Feb 13 15:37:06.998084 systemd-logind[1422]: Removed session 25. 
Feb 13 15:37:07.037981 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:37:07.039336 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:07.044180 systemd-logind[1422]: New session 26 of user core. Feb 13 15:37:07.054092 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:37:08.363232 sshd[4433]: Connection closed by 10.0.0.1 port 45460 Feb 13 15:37:08.365145 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:08.375952 kubelet[2603]: I0213 15:37:08.372140 2603 topology_manager.go:215] "Topology Admit Handler" podUID="94c46ee7-7dae-4f1c-8312-664653502b0e" podNamespace="kube-system" podName="cilium-vvpsb" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372564 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="cilium-agent" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372578 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="mount-cgroup" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372585 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="580dab04-9017-43f1-8ccb-fd5463c3bd0b" containerName="cilium-operator" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372591 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="apply-sysctl-overwrites" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372596 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="mount-bpf-fs" Feb 13 15:37:08.375952 kubelet[2603]: E0213 15:37:08.372602 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="clean-cilium-state" Feb 13 15:37:08.375952 kubelet[2603]: I0213 15:37:08.372625 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f589ff-009e-4d4b-9d1b-322288d2106e" containerName="cilium-agent" Feb 13 15:37:08.375952 kubelet[2603]: I0213 15:37:08.372637 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="580dab04-9017-43f1-8ccb-fd5463c3bd0b" containerName="cilium-operator" Feb 13 15:37:08.375747 systemd[1]: sshd@25-10.0.0.102:22-10.0.0.1:45460.service: Deactivated successfully. Feb 13 15:37:08.378795 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:37:08.379004 systemd[1]: session-26.scope: Consumed 1.219s CPU time. Feb 13 15:37:08.380586 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:37:08.390037 systemd[1]: Started sshd@26-10.0.0.102:22-10.0.0.1:45470.service - OpenSSH per-connection server daemon (10.0.0.1:45470). Feb 13 15:37:08.395877 systemd-logind[1422]: Removed session 26. Feb 13 15:37:08.408746 systemd[1]: Created slice kubepods-burstable-pod94c46ee7_7dae_4f1c_8312_664653502b0e.slice - libcontainer container kubepods-burstable-pod94c46ee7_7dae_4f1c_8312_664653502b0e.slice. Feb 13 15:37:08.437770 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 45470 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:37:08.439147 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:08.442590 systemd-logind[1422]: New session 27 of user core. 
Feb 13 15:37:08.456080 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:37:08.505266 sshd[4446]: Connection closed by 10.0.0.1 port 45470 Feb 13 15:37:08.505274 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:08.518564 systemd[1]: sshd@26-10.0.0.102:22-10.0.0.1:45470.service: Deactivated successfully. Feb 13 15:37:08.521128 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:37:08.524425 systemd-logind[1422]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:37:08.527168 systemd[1]: Started sshd@27-10.0.0.102:22-10.0.0.1:45482.service - OpenSSH per-connection server daemon (10.0.0.1:45482). Feb 13 15:37:08.528831 systemd-logind[1422]: Removed session 27. Feb 13 15:37:08.543627 kubelet[2603]: I0213 15:37:08.543585 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94c46ee7-7dae-4f1c-8312-664653502b0e-clustermesh-secrets\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543717 kubelet[2603]: I0213 15:37:08.543628 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94c46ee7-7dae-4f1c-8312-664653502b0e-cilium-ipsec-secrets\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543717 kubelet[2603]: I0213 15:37:08.543649 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94c46ee7-7dae-4f1c-8312-664653502b0e-cilium-config-path\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543717 kubelet[2603]: I0213 15:37:08.543668 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-cilium-cgroup\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543717 kubelet[2603]: I0213 15:37:08.543687 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8qp\" (UniqueName: \"kubernetes.io/projected/94c46ee7-7dae-4f1c-8312-664653502b0e-kube-api-access-ds8qp\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543804 kubelet[2603]: I0213 15:37:08.543740 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-lib-modules\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543804 kubelet[2603]: I0213 15:37:08.543776 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-xtables-lock\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543804 kubelet[2603]: I0213 15:37:08.543798 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-cni-path\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543860 kubelet[2603]: I0213 15:37:08.543816 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-etc-cni-netd\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543860 kubelet[2603]: I0213 15:37:08.543843 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-host-proc-sys-net\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543860 kubelet[2603]: I0213 15:37:08.543858 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94c46ee7-7dae-4f1c-8312-664653502b0e-hubble-tls\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543931 kubelet[2603]: I0213 15:37:08.543873 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-bpf-maps\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543931 kubelet[2603]: I0213 15:37:08.543890 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-cilium-run\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543931 kubelet[2603]: I0213 15:37:08.543905 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-hostproc\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.543989 kubelet[2603]: I0213 15:37:08.543934 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94c46ee7-7dae-4f1c-8312-664653502b0e-host-proc-sys-kernel\") pod \"cilium-vvpsb\" (UID: \"94c46ee7-7dae-4f1c-8312-664653502b0e\") " pod="kube-system/cilium-vvpsb" Feb 13 15:37:08.565862 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 45482 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:37:08.567579 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:08.575554 systemd-logind[1422]: New session 28 of user core. Feb 13 15:37:08.587120 systemd[1]: Started session-28.scope - Session 28 of User core. 
Feb 13 15:37:08.712684 kubelet[2603]: E0213 15:37:08.712572 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.714159 containerd[1437]: time="2025-02-13T15:37:08.714116621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvpsb,Uid:94c46ee7-7dae-4f1c-8312-664653502b0e,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:08.747391 containerd[1437]: time="2025-02-13T15:37:08.747139813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:08.747391 containerd[1437]: time="2025-02-13T15:37:08.747310136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:08.747391 containerd[1437]: time="2025-02-13T15:37:08.747380337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:08.747552 containerd[1437]: time="2025-02-13T15:37:08.747502458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:08.766102 systemd[1]: Started cri-containerd-4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65.scope - libcontainer container 4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65. Feb 13 15:37:08.783696 containerd[1437]: time="2025-02-13T15:37:08.783642815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvpsb,Uid:94c46ee7-7dae-4f1c-8312-664653502b0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\"" Feb 13 15:37:08.784391 kubelet[2603]: E0213 15:37:08.784352 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.787429 containerd[1437]: time="2025-02-13T15:37:08.787359988Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:37:08.799778 containerd[1437]: time="2025-02-13T15:37:08.799719285Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b\"" Feb 13 15:37:08.800438 containerd[1437]: time="2025-02-13T15:37:08.800393694Z" level=info msg="StartContainer for \"df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b\"" Feb 13 15:37:08.831124 systemd[1]: Started cri-containerd-df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b.scope - libcontainer container df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b. Feb 13 15:37:08.852978 containerd[1437]: time="2025-02-13T15:37:08.852892645Z" level=info msg="StartContainer for \"df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b\" returns successfully" Feb 13 15:37:08.865107 systemd[1]: cri-containerd-df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b.scope: Deactivated successfully. 
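[This block is one complete CRI lifecycle for the first init step: RunPodSandbox returns sandbox 4fa49af2..., CreateContainer and StartContainer run mount-cgroup inside it, and the cri-containerd-df5eee....scope deactivating moments later simply means the init container exited, as the shim-disconnected lines that follow confirm. A hypothetical way to list the exited init steps for this sandbox:

    crictl ps -a --pod 4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65
    # mount-cgroup (and each later init step) shows as Exited; cilium-agent eventually shows as Running
]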
Feb 13 15:37:08.885927 kubelet[2603]: E0213 15:37:08.885883 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.899080 containerd[1437]: time="2025-02-13T15:37:08.898847982Z" level=info msg="shim disconnected" id=df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b namespace=k8s.io Feb 13 15:37:08.899080 containerd[1437]: time="2025-02-13T15:37:08.898912303Z" level=warning msg="cleaning up after shim disconnected" id=df5eee79fd1ec603c0877b94ebbcdc7d89213f4cbda8d2b0ee97a2c5c48f0f6b namespace=k8s.io Feb 13 15:37:08.899080 containerd[1437]: time="2025-02-13T15:37:08.898930183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:09.889288 kubelet[2603]: E0213 15:37:09.889228 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:09.895593 containerd[1437]: time="2025-02-13T15:37:09.895284065Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:37:09.967186 containerd[1437]: time="2025-02-13T15:37:09.967058512Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801\"" Feb 13 15:37:09.967689 containerd[1437]: time="2025-02-13T15:37:09.967655640Z" level=info msg="StartContainer for \"0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801\"" Feb 13 15:37:10.003164 systemd[1]: Started cri-containerd-0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801.scope - libcontainer container 0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801. Feb 13 15:37:10.027788 containerd[1437]: time="2025-02-13T15:37:10.027628834Z" level=info msg="StartContainer for \"0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801\" returns successfully" Feb 13 15:37:10.033619 systemd[1]: cri-containerd-0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801.scope: Deactivated successfully. Feb 13 15:37:10.054143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:10.062655 containerd[1437]: time="2025-02-13T15:37:10.062595156Z" level=info msg="shim disconnected" id=0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801 namespace=k8s.io Feb 13 15:37:10.062655 containerd[1437]: time="2025-02-13T15:37:10.062651756Z" level=warning msg="cleaning up after shim disconnected" id=0281368bb6499fa852d13662984948ec9fba0cc109b83bf934e80a049c021801 namespace=k8s.io Feb 13 15:37:10.062655 containerd[1437]: time="2025-02-13T15:37:10.062660037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:10.724107 kubelet[2603]: E0213 15:37:10.724001 2603 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:37:10.895912 kubelet[2603]: E0213 15:37:10.895812 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:10.901884 containerd[1437]: time="2025-02-13T15:37:10.900775815Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:37:10.946112 containerd[1437]: time="2025-02-13T15:37:10.946052798Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca\"" Feb 13 15:37:10.946767 containerd[1437]: time="2025-02-13T15:37:10.946635686Z" level=info msg="StartContainer for \"0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca\"" Feb 13 15:37:10.990422 systemd[1]: Started cri-containerd-0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca.scope - libcontainer container 0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca. Feb 13 15:37:11.015817 containerd[1437]: time="2025-02-13T15:37:11.014105692Z" level=info msg="StartContainer for \"0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca\" returns successfully" Feb 13 15:37:11.018481 systemd[1]: cri-containerd-0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca.scope: Deactivated successfully. Feb 13 15:37:11.035757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:11.041930 containerd[1437]: time="2025-02-13T15:37:11.041846066Z" level=info msg="shim disconnected" id=0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca namespace=k8s.io Feb 13 15:37:11.041930 containerd[1437]: time="2025-02-13T15:37:11.041906107Z" level=warning msg="cleaning up after shim disconnected" id=0a283d3854defaec89f1b822b2465b9e678385146106621bf85f7e2c866dc8ca namespace=k8s.io Feb 13 15:37:11.041930 containerd[1437]: time="2025-02-13T15:37:11.041922508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:11.899411 kubelet[2603]: E0213 15:37:11.899370 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:11.903246 containerd[1437]: time="2025-02-13T15:37:11.903003226Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:37:11.927132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749294145.mount: Deactivated successfully. Feb 13 15:37:11.928145 containerd[1437]: time="2025-02-13T15:37:11.928107805Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6\"" Feb 13 15:37:11.928942 containerd[1437]: time="2025-02-13T15:37:11.928793094Z" level=info msg="StartContainer for \"a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6\"" Feb 13 15:37:11.964288 systemd[1]: Started cri-containerd-a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6.scope - libcontainer container a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6. Feb 13 15:37:11.984418 systemd[1]: cri-containerd-a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6.scope: Deactivated successfully. Feb 13 15:37:11.988753 containerd[1437]: time="2025-02-13T15:37:11.988281578Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c46ee7_7dae_4f1c_8312_664653502b0e.slice/cri-containerd-a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6.scope/memory.events\": no such file or directory" Feb 13 15:37:11.989969 containerd[1437]: time="2025-02-13T15:37:11.989872040Z" level=info msg="StartContainer for \"a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6\" returns successfully" Feb 13 15:37:12.002397 kubelet[2603]: I0213 15:37:12.002332 2603 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:37:12Z","lastTransitionTime":"2025-02-13T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:37:12.009966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6-rootfs.mount: Deactivated successfully. 
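[The *cgroupsv2.Manager.EventChan warning just above is a benign race: clean-cilium-state exited so quickly that its cgroup directory was gone before containerd could attach an inotify watch to memory.events, which is also why the scope deactivation is logged before StartContainer returns. The "Node became not ready" setter at the same instant is the NetworkPluginNotReady condition noted earlier finally reaching node status; it clears once cilium-agent is up. A hypothetical check of the equivalent path for the long-lived agent container started below; the counter values shown are illustrative:

    cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c46ee7_7dae_4f1c_8312_664653502b0e.slice/cri-containerd-f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487.scope/memory.events
    # low 0
    # high 0
    # max 0
    # oom 0
    # oom_kill 0
]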
Feb 13 15:37:12.017604 containerd[1437]: time="2025-02-13T15:37:12.017520489Z" level=info msg="shim disconnected" id=a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6 namespace=k8s.io
Feb 13 15:37:12.017604 containerd[1437]: time="2025-02-13T15:37:12.017580050Z" level=warning msg="cleaning up after shim disconnected" id=a321ba4d67f072be7d7a4325e2d6bd87bff6818f1e0e4f3b227d986b189767a6 namespace=k8s.io
Feb 13 15:37:12.017604 containerd[1437]: time="2025-02-13T15:37:12.017590090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:12.906080 kubelet[2603]: E0213 15:37:12.905678 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:12.911081 containerd[1437]: time="2025-02-13T15:37:12.910977948Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:37:12.936319 containerd[1437]: time="2025-02-13T15:37:12.936276483Z" level=info msg="CreateContainer within sandbox \"4fa49af2e8d49c6966ff52edeb541ef45bbf2281f43e8209a9b8014fd9cd8a65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487\""
Feb 13 15:37:12.937580 containerd[1437]: time="2025-02-13T15:37:12.936756730Z" level=info msg="StartContainer for \"f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487\""
Feb 13 15:37:12.954662 systemd[1]: run-containerd-runc-k8s.io-f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487-runc.VzP0bn.mount: Deactivated successfully.
Feb 13 15:37:12.968066 systemd[1]: Started cri-containerd-f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487.scope - libcontainer container f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487.
Feb 13 15:37:12.997859 containerd[1437]: time="2025-02-13T15:37:12.997806500Z" level=info msg="StartContainer for \"f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487\" returns successfully"
Feb 13 15:37:13.247959 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:37:13.909758 kubelet[2603]: E0213 15:37:13.909452 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:14.911135 kubelet[2603]: E0213 15:37:14.911095 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:16.079695 systemd-networkd[1375]: lxc_health: Link UP
Feb 13 15:37:16.088596 systemd-networkd[1375]: lxc_health: Gained carrier
Feb 13 15:37:16.715020 kubelet[2603]: E0213 15:37:16.714937 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:16.736259 kubelet[2603]: I0213 15:37:16.736169 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvpsb" podStartSLOduration=8.73610655 podStartE2EDuration="8.73610655s" podCreationTimestamp="2025-02-13 15:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:13.924237738 +0000 UTC m=+93.345820663" watchObservedRunningTime="2025-02-13 15:37:16.73610655 +0000 UTC m=+96.157689475"
Feb 13 15:37:16.920623 kubelet[2603]: E0213 15:37:16.920489 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.834390 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Feb 13 15:37:19.217148 systemd[1]: run-containerd-runc-k8s.io-f9992b1131b9d3a1bee5eb8549b5b0ce42e924c03ea84961cffaad917918e487-runc.mJM3QK.mount: Deactivated successfully.
Feb 13 15:37:21.376908 sshd[4454]: Connection closed by 10.0.0.1 port 45482
Feb 13 15:37:21.376691 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:21.379187 systemd[1]: sshd@27-10.0.0.102:22-10.0.0.1:45482.service: Deactivated successfully.
Feb 13 15:37:21.381309 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:37:21.382813 systemd-logind[1422]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:37:21.383889 systemd-logind[1422]: Removed session 28.