Feb 13 15:36:13.905735 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:36:13.905765 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:36:13.905780 kernel: KASLR enabled
Feb 13 15:36:13.905789 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:36:13.905798 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:36:13.905804 kernel: random: crng init done
Feb 13 15:36:13.905811 kernel: secureboot: Secure boot disabled
Feb 13 15:36:13.905818 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:36:13.905824 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:36:13.905833 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:36:13.905839 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905846 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905852 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905858 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905866 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905874 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905881 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905888 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905894 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:36:13.905901 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:36:13.905908 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:36:13.905915 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:13.905921 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:36:13.905928 kernel: Zone ranges:
Feb 13 15:36:13.905935 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:13.905943 kernel: DMA32 empty
Feb 13 15:36:13.905949 kernel: Normal empty
Feb 13 15:36:13.905956 kernel: Movable zone start for each node
Feb 13 15:36:13.905963 kernel: Early memory node ranges
Feb 13 15:36:13.905977 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:36:13.905989 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:36:13.905996 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:36:13.906002 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:36:13.906009 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:36:13.906016 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:36:13.906022 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:36:13.906029 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:13.906037 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:36:13.906044 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:36:13.906051 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:36:13.906060 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:36:13.906068 kernel: psci: Trusted OS migration not required
Feb 13 15:36:13.906075 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:36:13.906084 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:36:13.906092 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:36:13.906099 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:36:13.906106 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:36:13.906113 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:36:13.906120 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:36:13.906127 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:36:13.906135 kernel: CPU features: detected: Spectre-v4
Feb 13 15:36:13.906141 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:36:13.906149 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:36:13.906158 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:36:13.906165 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:36:13.906172 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:36:13.906179 kernel: alternatives: applying boot alternatives
Feb 13 15:36:13.906187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:36:13.906194 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:36:13.906201 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:36:13.906209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:36:13.906216 kernel: Fallback order for Node 0: 0
Feb 13 15:36:13.906223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:36:13.906230 kernel: Policy zone: DMA
Feb 13 15:36:13.906239 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:36:13.906255 kernel: software IO TLB: area num 4.
Feb 13 15:36:13.906262 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:36:13.906270 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Feb 13 15:36:13.906278 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:36:13.906285 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:36:13.906293 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:36:13.906301 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:36:13.906309 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:36:13.906316 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:36:13.906323 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:36:13.906330 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:36:13.906340 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:36:13.906347 kernel: GICv3: 256 SPIs implemented
Feb 13 15:36:13.906355 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:36:13.906362 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:36:13.906369 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:36:13.906376 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:36:13.906383 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:36:13.906390 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:36:13.906398 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:36:13.906405 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:36:13.906412 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:36:13.906421 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:36:13.906428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:13.906435 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:36:13.906443 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:36:13.906450 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:36:13.906457 kernel: arm-pv: using stolen time PV
Feb 13 15:36:13.906464 kernel: Console: colour dummy device 80x25
Feb 13 15:36:13.906472 kernel: ACPI: Core revision 20230628
Feb 13 15:36:13.906479 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:36:13.906487 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:36:13.906496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:36:13.906503 kernel: landlock: Up and running.
Feb 13 15:36:13.906511 kernel: SELinux: Initializing.
Feb 13 15:36:13.906518 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:36:13.906526 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:36:13.906533 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:36:13.906541 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:36:13.906548 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:36:13.906556 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:36:13.906563 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:36:13.906572 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:36:13.906579 kernel: Remapping and enabling EFI services.
Feb 13 15:36:13.906587 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:36:13.906594 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:36:13.906602 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:36:13.906610 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:36:13.906618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:13.906625 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:36:13.906633 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:36:13.906642 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:36:13.906650 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:36:13.906663 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:13.906672 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:36:13.906680 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:36:13.906688 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:36:13.906696 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:36:13.906704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:13.906711 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:36:13.906720 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:36:13.906734 kernel: SMP: Total of 4 processors activated.
Feb 13 15:36:13.906742 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:36:13.906749 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:36:13.906757 kernel: CPU features: detected: Common not Private translations
Feb 13 15:36:13.906765 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:36:13.906773 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:36:13.906781 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:36:13.906791 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:36:13.906799 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:36:13.906807 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:36:13.906814 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:36:13.906822 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:36:13.906830 kernel: alternatives: applying system-wide alternatives
Feb 13 15:36:13.906838 kernel: devtmpfs: initialized
Feb 13 15:36:13.906846 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:36:13.906853 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:36:13.906863 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:36:13.906871 kernel: SMBIOS 3.0.0 present.
Feb 13 15:36:13.906879 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:36:13.906887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:36:13.906895 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:36:13.906903 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:36:13.906911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:36:13.906919 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:36:13.906927 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:36:13.906936 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:36:13.906944 kernel: cpuidle: using governor menu
Feb 13 15:36:13.906952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:36:13.906961 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:36:13.906968 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:36:13.906976 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:36:13.906984 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:36:13.906992 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:36:13.907000 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:36:13.907010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:36:13.907018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:36:13.907026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:36:13.907034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:36:13.907042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:36:13.907050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:36:13.907058 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:36:13.907066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:36:13.907074 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:36:13.907083 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:36:13.907091 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:36:13.907099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:36:13.907107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:36:13.907115 kernel: ACPI: Interpreter enabled
Feb 13 15:36:13.907122 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:36:13.907130 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:36:13.907138 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:36:13.907146 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:36:13.907155 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:36:13.907314 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:36:13.907394 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:36:13.907461 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:36:13.907526 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:36:13.907595 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:36:13.907605 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:36:13.907617 kernel: PCI host bridge to bus 0000:00
Feb 13 15:36:13.907690 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:36:13.907761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:36:13.907842 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:36:13.907901 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:36:13.907984 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:36:13.908062 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:36:13.908133 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:36:13.908199 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:36:13.908312 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:36:13.908381 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:36:13.908446 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:36:13.908512 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:36:13.908571 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:36:13.908656 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:36:13.908736 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:36:13.908747 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:36:13.908755 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:36:13.908764 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:36:13.908772 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:36:13.908780 kernel: iommu: Default domain type: Translated
Feb 13 15:36:13.908789 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:36:13.908800 kernel: efivars: Registered efivars operations
Feb 13 15:36:13.908808 kernel: vgaarb: loaded
Feb 13 15:36:13.908816 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:36:13.908824 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:36:13.908832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:36:13.908840 kernel: pnp: PnP ACPI init
Feb 13 15:36:13.908923 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:36:13.908934 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:36:13.908944 kernel: NET: Registered PF_INET protocol family
Feb 13 15:36:13.908953 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:36:13.908961 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:36:13.908969 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:36:13.908977 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:36:13.908985 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:36:13.908993 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:36:13.909001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:36:13.909009 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:36:13.909019 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:36:13.909027 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:36:13.909036 kernel: kvm [1]: HYP mode not available
Feb 13 15:36:13.909043 kernel: Initialise system trusted keyrings
Feb 13 15:36:13.909052 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:36:13.909060 kernel: Key type asymmetric registered
Feb 13 15:36:13.909068 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:36:13.909076 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:36:13.909083 kernel: io scheduler mq-deadline registered
Feb 13 15:36:13.909093 kernel: io scheduler kyber registered
Feb 13 15:36:13.909101 kernel: io scheduler bfq registered
Feb 13 15:36:13.909109 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:36:13.909117 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:36:13.909125 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:36:13.909197 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:36:13.909208 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:36:13.909216 kernel: thunder_xcv, ver 1.0
Feb 13 15:36:13.909224 kernel: thunder_bgx, ver 1.0
Feb 13 15:36:13.909234 kernel: nicpf, ver 1.0
Feb 13 15:36:13.909251 kernel: nicvf, ver 1.0
Feb 13 15:36:13.909334 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:36:13.909404 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:36:13 UTC (1739460973)
Feb 13 15:36:13.909415 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:36:13.909423 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:36:13.909431 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:36:13.909439 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:36:13.909450 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:36:13.909458 kernel: Segment Routing with IPv6
Feb 13 15:36:13.909466 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:36:13.909474 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:36:13.909482 kernel: Key type dns_resolver registered
Feb 13 15:36:13.909490 kernel: registered taskstats version 1
Feb 13 15:36:13.909497 kernel: Loading compiled-in X.509 certificates
Feb 13 15:36:13.909506 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:36:13.909513 kernel: Key type .fscrypt registered
Feb 13 15:36:13.909523 kernel: Key type fscrypt-provisioning registered
Feb 13 15:36:13.909531 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:36:13.909539 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:36:13.909547 kernel: ima: No architecture policies found
Feb 13 15:36:13.909555 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:36:13.909563 kernel: clk: Disabling unused clocks
Feb 13 15:36:13.909571 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:36:13.909579 kernel: Run /init as init process
Feb 13 15:36:13.909587 kernel: with arguments:
Feb 13 15:36:13.909596 kernel: /init
Feb 13 15:36:13.909604 kernel: with environment:
Feb 13 15:36:13.909612 kernel: HOME=/
Feb 13 15:36:13.909620 kernel: TERM=linux
Feb 13 15:36:13.909628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:36:13.909637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:36:13.909647 systemd[1]: Detected virtualization kvm.
Feb 13 15:36:13.909656 systemd[1]: Detected architecture arm64.
Feb 13 15:36:13.909666 systemd[1]: Running in initrd.
Feb 13 15:36:13.909675 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:36:13.909683 systemd[1]: Hostname set to .
Feb 13 15:36:13.909692 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:36:13.909700 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:36:13.909708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:36:13.909717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:36:13.909733 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:36:13.909744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:36:13.909752 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:36:13.909761 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:36:13.909771 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:36:13.909779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:36:13.909788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:36:13.909798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:36:13.909806 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:36:13.909815 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:36:13.909823 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:36:13.909832 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:36:13.909840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:36:13.909849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:36:13.909858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:36:13.909866 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:36:13.909876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:36:13.909885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:36:13.909894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:36:13.909902 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:36:13.909911 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:36:13.909919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:36:13.909928 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:36:13.909936 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:36:13.909944 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:36:13.909954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:36:13.909963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:13.909971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:36:13.909980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:36:13.909988 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:36:13.909999 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:36:13.910008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:13.910017 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:13.910026 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:36:13.910035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:36:13.910061 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:36:13.910083 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:36:13.910092 systemd-journald[239]: Journal started
Feb 13 15:36:13.910111 systemd-journald[239]: Runtime Journal (/run/log/journal/2a70478bcba0489a873e12a794291158) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:36:13.887397 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:36:13.911317 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:36:13.913504 kernel: Bridge firewalling registered
Feb 13 15:36:13.913522 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:36:13.913738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:36:13.914928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:13.916450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:36:13.935443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:36:13.936911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:36:13.938550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:36:13.949678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:13.954025 dracut-cmdline[266]: dracut-dracut-053
Feb 13 15:36:13.954025 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:36:13.954056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:36:13.963439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:36:13.989050 systemd-resolved[294]: Positive Trust Anchors:
Feb 13 15:36:13.989128 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:36:13.989159 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:36:13.994060 systemd-resolved[294]: Defaulting to hostname 'linux'.
Feb 13 15:36:13.995473 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:36:13.996537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:36:14.038285 kernel: SCSI subsystem initialized
Feb 13 15:36:14.043294 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:36:14.053270 kernel: iscsi: registered transport (tcp)
Feb 13 15:36:14.067289 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:36:14.067346 kernel: QLogic iSCSI HBA Driver
Feb 13 15:36:14.122697 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:36:14.133515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:36:14.153767 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:36:14.153830 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:36:14.155265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:36:14.202274 kernel: raid6: neonx8 gen() 15770 MB/s
Feb 13 15:36:14.219269 kernel: raid6: neonx4 gen() 15632 MB/s
Feb 13 15:36:14.236262 kernel: raid6: neonx2 gen() 13166 MB/s
Feb 13 15:36:14.253265 kernel: raid6: neonx1 gen() 10395 MB/s
Feb 13 15:36:14.270262 kernel: raid6: int64x8 gen() 6909 MB/s
Feb 13 15:36:14.287275 kernel: raid6: int64x4 gen() 7316 MB/s
Feb 13 15:36:14.304271 kernel: raid6: int64x2 gen() 6118 MB/s
Feb 13 15:36:14.321293 kernel: raid6: int64x1 gen() 5041 MB/s
Feb 13 15:36:14.321358 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Feb 13 15:36:14.338270 kernel: raid6: .... xor() 11873 MB/s, rmw enabled
Feb 13 15:36:14.338285 kernel: raid6: using neon recovery algorithm
Feb 13 15:36:14.344483 kernel: xor: measuring software checksum speed
Feb 13 15:36:14.344515 kernel: 8regs : 19812 MB/sec
Feb 13 15:36:14.344525 kernel: 32regs : 19679 MB/sec
Feb 13 15:36:14.345514 kernel: arm64_neon : 27052 MB/sec
Feb 13 15:36:14.345530 kernel: xor: using function: arm64_neon (27052 MB/sec)
Feb 13 15:36:14.398024 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:36:14.410835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:36:14.424446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:36:14.436112 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 15:36:14.439365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:36:14.443431 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:36:14.460670 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Feb 13 15:36:14.491990 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:36:14.503418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:36:14.543166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:36:14.556404 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:36:14.571359 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:36:14.572923 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:36:14.574654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:36:14.576903 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:36:14.594063 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:36:14.597477 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:36:14.604114 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:36:14.604224 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:36:14.604236 kernel: GPT:9289727 != 19775487
Feb 13 15:36:14.604259 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:36:14.604270 kernel: GPT:9289727 != 19775487
Feb 13 15:36:14.604281 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:36:14.604291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:36:14.601691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:36:14.601820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:14.603004 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:14.604844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:36:14.605128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:14.606905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:14.609148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:14.611122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:36:14.625268 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513)
Feb 13 15:36:14.627833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:36:14.630521 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514)
Feb 13 15:36:14.630267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:14.637572 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:36:14.643746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:36:14.644703 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:36:14.649538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:36:14.656537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:36:14.658841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:14.662905 disk-uuid[551]: Primary Header is updated.
Feb 13 15:36:14.662905 disk-uuid[551]: Secondary Entries is updated.
Feb 13 15:36:14.662905 disk-uuid[551]: Secondary Header is updated.
Feb 13 15:36:14.668259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:36:14.684561 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:15.675258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:36:15.675914 disk-uuid[553]: The operation has completed successfully.
Feb 13 15:36:15.695053 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:36:15.695150 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:36:15.715414 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:36:15.718327 sh[574]: Success
Feb 13 15:36:15.732282 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:36:15.762655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:36:15.773613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:36:15.775105 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:36:15.786041 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:36:15.786095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:36:15.786107 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:36:15.786117 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:36:15.786607 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:36:15.790116 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:36:15.791304 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:36:15.797432 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:36:15.798732 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:36:15.806271 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:36:15.806311 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:36:15.806322 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:36:15.808258 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:36:15.815588 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:36:15.816863 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:36:15.822278 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:36:15.827410 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:36:15.902146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:36:15.916447 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:36:15.927422 ignition[659]: Ignition 2.20.0
Feb 13 15:36:15.927433 ignition[659]: Stage: fetch-offline
Feb 13 15:36:15.927471 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:15.927480 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:36:15.927635 ignition[659]: parsed url from cmdline: ""
Feb 13 15:36:15.927639 ignition[659]: no config URL provided
Feb 13 15:36:15.927643 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:36:15.927650 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:36:15.927678 ignition[659]: op(1): [started] loading QEMU firmware config module
Feb 13 15:36:15.927682 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:36:15.935514 ignition[659]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:36:15.940202 systemd-networkd[768]: lo: Link UP
Feb 13 15:36:15.940215 systemd-networkd[768]: lo: Gained carrier
Feb 13 15:36:15.941004 systemd-networkd[768]: Enumeration completed
Feb 13 15:36:15.941085 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:36:15.942732 systemd[1]: Reached target network.target - Network.
Feb 13 15:36:15.944172 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:15.944176 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:36:15.944922 systemd-networkd[768]: eth0: Link UP
Feb 13 15:36:15.944924 systemd-networkd[768]: eth0: Gained carrier
Feb 13 15:36:15.944930 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:15.965316 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:36:15.984772 ignition[659]: parsing config with SHA512: ee7f8a39ead9046d1ee2d0eb2a05f58cbb3a1533b0a9d23cf407b99412736b11a7f96facef4296b426a7dc2dc727d0e5d72d05bc2a6dc7dd5c5887054d6bf112
Feb 13 15:36:15.991086 unknown[659]: fetched base config from "system"
Feb 13 15:36:15.991096 unknown[659]: fetched user config from "qemu"
Feb 13 15:36:15.991560 ignition[659]: fetch-offline: fetch-offline passed
Feb 13 15:36:15.991631 ignition[659]: Ignition finished successfully
Feb 13 15:36:15.993962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:36:15.995415 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:36:16.000415 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:36:16.012409 ignition[775]: Ignition 2.20.0
Feb 13 15:36:16.012420 ignition[775]: Stage: kargs
Feb 13 15:36:16.012585 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:16.012595 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:36:16.013516 ignition[775]: kargs: kargs passed
Feb 13 15:36:16.013560 ignition[775]: Ignition finished successfully
Feb 13 15:36:16.015658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:36:16.031424 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:36:16.041165 ignition[783]: Ignition 2.20.0
Feb 13 15:36:16.041177 ignition[783]: Stage: disks
Feb 13 15:36:16.041368 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:16.041380 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:36:16.042233 ignition[783]: disks: disks passed
Feb 13 15:36:16.042298 ignition[783]: Ignition finished successfully
Feb 13 15:36:16.044303 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:36:16.045701 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:36:16.046999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:36:16.048799 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:36:16.050257 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:36:16.051646 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:36:16.067488 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:36:16.078085 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:36:16.081656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:36:16.084336 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:36:16.127273 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:36:16.127841 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:36:16.128927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:36:16.141338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:36:16.143471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:36:16.144343 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:36:16.144383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:36:16.144406 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:36:16.150195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:36:16.151871 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:36:16.154261 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
Feb 13 15:36:16.156949 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:36:16.156975 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:36:16.156986 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:36:16.159264 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:36:16.161034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:36:16.194347 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:36:16.198621 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:36:16.202216 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:36:16.205116 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:36:16.278694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:36:16.291360 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:36:16.292767 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:36:16.298274 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:36:16.313165 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:36:16.314572 ignition[915]: INFO : Ignition 2.20.0
Feb 13 15:36:16.314572 ignition[915]: INFO : Stage: mount
Feb 13 15:36:16.314572 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:16.314572 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:36:16.318096 ignition[915]: INFO : mount: mount passed
Feb 13 15:36:16.318096 ignition[915]: INFO : Ignition finished successfully
Feb 13 15:36:16.316493 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:36:16.326378 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:36:16.784772 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:36:16.794437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:36:16.800497 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 15:36:16.800532 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:36:16.800543 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:36:16.801669 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:36:16.804272 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:36:16.804875 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:36:16.821649 ignition[947]: INFO : Ignition 2.20.0
Feb 13 15:36:16.821649 ignition[947]: INFO : Stage: files
Feb 13 15:36:16.822910 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:16.822910 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:36:16.822910 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:36:16.825358 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:36:16.825358 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:36:16.828289 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:36:16.829301 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:36:16.829301 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:36:16.828782 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 15:36:16.832114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:36:16.832114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:36:16.923603 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:36:17.192610 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 15:36:17.476135 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:36:17.476135 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:36:17.478901 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:36:17.728939 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:36:17.794708 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:36:17.796238 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:36:18.039935 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:36:18.273999 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:36:18.273999 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:36:18.276595 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:36:18.301106 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:36:18.304890 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:36:18.306006 ignition[947]: INFO : files: files passed
Feb 13 15:36:18.306006 ignition[947]: INFO : Ignition finished successfully
Feb 13 15:36:18.306821 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:36:18.317430 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:36:18.319095 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:36:18.322549 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:36:18.322646 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:36:18.328236 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:36:18.331604 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:36:18.331604 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:36:18.334087 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:36:18.334522 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:36:18.336631 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:36:18.344602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:36:18.364729 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:36:18.364835 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:36:18.366457 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:36:18.367825 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:36:18.369122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:36:18.369864 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:36:18.385037 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:36:18.392573 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:36:18.404185 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:36:18.405835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:36:18.408377 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:36:18.409080 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:36:18.409194 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:36:18.410316 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:36:18.411091 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:36:18.411841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:36:18.412687 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:36:18.419816 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:36:18.423075 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:36:18.424371 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:36:18.425887 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:36:18.427124 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:36:18.429186 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:36:18.430775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:36:18.430935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:36:18.434444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:36:18.435298 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:36:18.436718 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:36:18.437335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:36:18.438314 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:36:18.438431 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:36:18.440302 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:36:18.440414 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:36:18.441710 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:36:18.442818 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:36:18.447791 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:36:18.448959 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:36:18.450139 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:36:18.451450 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:36:18.451540 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:36:18.452630 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:36:18.452722 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:36:18.453992 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:36:18.454094 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:36:18.455747 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:36:18.455847 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:36:18.467482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:36:18.468822 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:36:18.468945 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:36:18.474565 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:36:18.475300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:36:18.475427 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:36:18.479603 ignition[1002]: INFO : Ignition 2.20.0 Feb 13 15:36:18.479603 ignition[1002]: INFO : Stage: umount Feb 13 15:36:18.479603 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:36:18.479603 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:36:18.476926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 15:36:18.483482 ignition[1002]: INFO : umount: umount passed Feb 13 15:36:18.483482 ignition[1002]: INFO : Ignition finished successfully Feb 13 15:36:18.477026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:36:18.481136 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:36:18.482274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:36:18.484618 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:36:18.484700 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:36:18.488145 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:36:18.489028 systemd[1]: Stopped target network.target - Network. Feb 13 15:36:18.490710 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:36:18.490793 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:36:18.493982 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:36:18.494037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:36:18.495192 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:36:18.495236 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:36:18.496663 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:36:18.496718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:36:18.498229 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:36:18.499338 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:36:18.506031 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:36:18.506147 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:36:18.506320 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 15:36:18.508238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:36:18.508443 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:36:18.509622 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:36:18.509733 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:36:18.511053 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:36:18.511084 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:36:18.517339 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:36:18.518332 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:36:18.518395 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:36:18.519904 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:36:18.519949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:36:18.521237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:36:18.521409 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:36:18.523051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:36:18.533577 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:36:18.534372 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:36:18.544071 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 15:36:18.544237 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:36:18.546545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:36:18.546592 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:36:18.547833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:36:18.547864 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:36:18.549188 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:36:18.549236 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:36:18.551228 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:36:18.551282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:36:18.553326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:36:18.553370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:36:18.575450 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:36:18.576295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:36:18.576359 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:36:18.577327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:36:18.577372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:36:18.578544 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:36:18.578635 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:36:18.580633 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:36:18.580744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:36:18.582185 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:36:18.583320 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:36:18.583391 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:36:18.585611 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:36:18.596121 systemd[1]: Switching root. Feb 13 15:36:18.618367 systemd-journald[239]: Journal stopped Feb 13 15:36:19.339671 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 15:36:19.339747 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:36:19.339761 kernel: SELinux: policy capability open_perms=1 Feb 13 15:36:19.339772 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:36:19.339782 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:36:19.339793 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:36:19.339804 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:36:19.339814 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:36:19.339827 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:36:19.339837 kernel: audit: type=1403 audit(1739460978.782:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:36:19.339851 systemd[1]: Successfully loaded SELinux policy in 31.999ms. Feb 13 15:36:19.339873 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.525ms. 
Feb 13 15:36:19.339885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:36:19.339896 systemd[1]: Detected virtualization kvm. Feb 13 15:36:19.339907 systemd[1]: Detected architecture arm64. Feb 13 15:36:19.339918 systemd[1]: Detected first boot. Feb 13 15:36:19.339929 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:36:19.339943 zram_generator::config[1046]: No configuration found. Feb 13 15:36:19.339955 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:36:19.339966 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:36:19.339977 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:36:19.339988 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:36:19.339999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:36:19.340011 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:36:19.340023 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:36:19.340036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:36:19.340047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:36:19.340059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:36:19.340070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:36:19.340081 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:36:19.340093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:36:19.340105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:36:19.340116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:36:19.340127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:36:19.340141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:36:19.340152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:36:19.340164 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:36:19.340175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:36:19.340186 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:36:19.340199 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:36:19.340217 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:36:19.340230 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:36:19.340250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:36:19.340265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:36:19.340286 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:36:19.340297 systemd[1]: Reached target swap.target - Swaps. 
Feb 13 15:36:19.340308 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:36:19.340319 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:36:19.340330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:36:19.340343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:36:19.340354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:36:19.340368 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:36:19.340379 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:36:19.340390 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:36:19.340401 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:36:19.340413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:36:19.340426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:36:19.340437 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:36:19.340449 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:36:19.340462 systemd[1]: Reached target machines.target - Containers. Feb 13 15:36:19.340475 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:36:19.340488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:36:19.340499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:36:19.340511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:36:19.340528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:36:19.340544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:36:19.340555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:36:19.340566 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:36:19.340578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:36:19.340591 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:36:19.340601 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:36:19.340613 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:36:19.340625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:36:19.340636 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:36:19.340646 kernel: fuse: init (API version 7.39) Feb 13 15:36:19.340656 kernel: loop: module loaded Feb 13 15:36:19.340668 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:36:19.340679 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:36:19.340689 kernel: ACPI: bus type drm_connector registered Feb 13 15:36:19.340706 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:36:19.340719 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 15:36:19.340731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:36:19.340742 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:36:19.340753 systemd[1]: Stopped verity-setup.service. Feb 13 15:36:19.340764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:36:19.340776 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:36:19.340790 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:36:19.340801 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:36:19.340812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:36:19.340823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:36:19.340856 systemd-journald[1106]: Collecting audit messages is disabled. Feb 13 15:36:19.340880 systemd-journald[1106]: Journal started Feb 13 15:36:19.340902 systemd-journald[1106]: Runtime Journal (/run/log/journal/2a70478bcba0489a873e12a794291158) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:36:19.149001 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:36:19.161316 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:36:19.161679 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:36:19.342938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:36:19.344924 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:36:19.345752 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:36:19.345895 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:36:19.347047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:36:19.347171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:36:19.349618 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:36:19.350709 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:36:19.350841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:36:19.352067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:36:19.352214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:36:19.353525 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:36:19.353655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:36:19.354770 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:36:19.354902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:36:19.356005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:36:19.357116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:36:19.358510 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:36:19.370847 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:36:19.380352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:36:19.382295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:36:19.383132 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 13 15:36:19.383178 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:36:19.385077 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:36:19.387043 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:36:19.388934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:36:19.389867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:36:19.391237 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:36:19.393109 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:36:19.394217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:36:19.397418 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:36:19.399516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:36:19.400466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:36:19.401178 systemd-journald[1106]: Time spent on flushing to /var/log/journal/2a70478bcba0489a873e12a794291158 is 17.123ms for 857 entries. Feb 13 15:36:19.401178 systemd-journald[1106]: System Journal (/var/log/journal/2a70478bcba0489a873e12a794291158) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:36:19.425517 systemd-journald[1106]: Received client request to flush runtime journal. Feb 13 15:36:19.425577 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 15:36:19.404466 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:36:19.410418 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:36:19.412853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:36:19.414149 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:36:19.421312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:36:19.422970 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:36:19.425274 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:36:19.429726 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:36:19.451259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:36:19.451951 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:36:19.455452 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:36:19.457033 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:36:19.461998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:36:19.470865 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:36:19.472759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:36:19.475553 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 15:36:19.477241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:36:19.490461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:36:19.493273 kernel: loop1: detected capacity change from 0 to 189592 Feb 13 15:36:19.516005 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:36:19.516024 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:36:19.520501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:36:19.530505 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 15:36:19.562273 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 15:36:19.567262 kernel: loop4: detected capacity change from 0 to 189592 Feb 13 15:36:19.573276 kernel: loop5: detected capacity change from 0 to 116808 Feb 13 15:36:19.576341 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:36:19.576754 (sd-merge)[1182]: Merged extensions into '/usr'. Feb 13 15:36:19.582345 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:36:19.582365 systemd[1]: Reloading... Feb 13 15:36:19.640276 zram_generator::config[1215]: No configuration found. Feb 13 15:36:19.695706 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:36:19.735320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:36:19.770944 systemd[1]: Reloading finished in 188 ms. Feb 13 15:36:19.803675 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:36:19.805350 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:36:19.816417 systemd[1]: Starting ensure-sysext.service... Feb 13 15:36:19.818176 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:36:19.827354 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:36:19.827373 systemd[1]: Reloading... Feb 13 15:36:19.835720 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:36:19.836333 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:36:19.837093 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:36:19.837428 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 15:36:19.837538 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 15:36:19.839762 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:36:19.839869 systemd-tmpfiles[1244]: Skipping /boot Feb 13 15:36:19.846788 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:36:19.846879 systemd-tmpfiles[1244]: Skipping /boot Feb 13 15:36:19.874274 zram_generator::config[1271]: No configuration found. 
Feb 13 15:36:19.955296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:36:19.990914 systemd[1]: Reloading finished in 163 ms. Feb 13 15:36:20.006467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:36:20.007710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:36:20.024611 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:36:20.026974 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:36:20.029010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:36:20.032458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:36:20.035478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:36:20.041571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:36:20.054212 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:36:20.057842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:36:20.060937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:36:20.063096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:36:20.068369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:36:20.069382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:36:20.071228 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Feb 13 15:36:20.071594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:36:20.071732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:36:20.075060 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:36:20.075203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:36:20.076887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:36:20.080403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:36:20.082105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:36:20.083171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:36:20.084556 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:36:20.086011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:36:20.087409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:36:20.087534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:36:20.093040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:36:20.097509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 15:36:20.105413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:36:20.108563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:36:20.110790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:36:20.111405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:36:20.112995 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:36:20.115676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:36:20.115826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:36:20.117142 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:36:20.117436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:36:20.119518 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:36:20.127971 systemd[1]: Finished ensure-sysext.service. Feb 13 15:36:20.129999 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:36:20.148978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:36:20.149154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:36:20.150501 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:36:20.150639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:36:20.152044 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:36:20.162412 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:36:20.163150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:36:20.163216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:36:20.165530 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:36:20.168408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:36:20.176463 augenrules[1377]: No rules Feb 13 15:36:20.178173 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:36:20.178375 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:36:20.188375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1351) Feb 13 15:36:20.227212 systemd-networkd[1375]: lo: Link UP Feb 13 15:36:20.227221 systemd-networkd[1375]: lo: Gained carrier Feb 13 15:36:20.228013 systemd-networkd[1375]: Enumeration completed Feb 13 15:36:20.228124 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:36:20.230240 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:36:20.230251 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:36:20.230836 systemd-networkd[1375]: eth0: Link UP Feb 13 15:36:20.230846 systemd-networkd[1375]: eth0: Gained carrier Feb 13 15:36:20.230858 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:36:20.238457 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:36:20.240680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:36:20.248343 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:36:20.252538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:36:20.262779 systemd-resolved[1310]: Positive Trust Anchors: Feb 13 15:36:20.262948 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:36:20.262980 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:36:20.264496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:36:20.270840 systemd-resolved[1310]: Defaulting to hostname 'linux'. Feb 13 15:36:20.273712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:36:20.274921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:36:20.275983 systemd[1]: Reached target network.target - Network. Feb 13 15:36:20.278376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:36:20.281380 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:36:20.282364 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:36:20.284106 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:36:20.284525 systemd-timesyncd[1379]: Initial clock synchronization to Thu 2025-02-13 15:36:20.082854 UTC. Feb 13 15:36:20.285734 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:36:20.293562 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:36:20.311860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:36:20.317182 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:36:20.351968 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:36:20.353217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:36:20.354057 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:36:20.354953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 15:36:20.355913 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:36:20.357045 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:36:20.357975 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:36:20.358924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:36:20.360002 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:36:20.360036 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:36:20.360715 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:36:20.361954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:36:20.364188 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:36:20.376338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:36:20.378432 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:36:20.379704 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:36:20.380590 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:36:20.381319 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:36:20.382029 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:36:20.382060 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:36:20.383098 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:36:20.384935 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:36:20.386576 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:36:20.388417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:36:20.391786 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:36:20.396158 jq[1414]: false Feb 13 15:36:20.396532 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:36:20.397620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:36:20.399377 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:36:20.403438 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:36:20.405324 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:36:20.412421 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:36:20.413206 extend-filesystems[1415]: Found loop3 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found loop4 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found loop5 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda1 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda2 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda3 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found usr Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda4 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda6 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda7 Feb 13 15:36:20.413206 extend-filesystems[1415]: Found vda9 Feb 13 15:36:20.413206 extend-filesystems[1415]: Checking size of /dev/vda9 Feb 13 15:36:20.415151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:36:20.415556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:36:20.416179 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:36:20.418911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:36:20.421067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:36:20.427468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:36:20.427625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:36:20.430561 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:36:20.430714 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:36:20.434601 jq[1428]: true Feb 13 15:36:20.440394 dbus-daemon[1413]: [system] SELinux support is enabled Feb 13 15:36:20.441586 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:36:20.450546 extend-filesystems[1415]: Resized partition /dev/vda9 Feb 13 15:36:20.451317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:36:20.451344 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:36:20.457523 tar[1432]: linux-arm64/helm Feb 13 15:36:20.456415 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:36:20.457809 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:36:20.463310 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1354) Feb 13 15:36:20.456432 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:36:20.463423 jq[1441]: true Feb 13 15:36:20.460480 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:36:20.469956 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:36:20.470025 update_engine[1425]: I20250213 15:36:20.469485 1425 main.cc:92] Flatcar Update Engine starting Feb 13 15:36:20.466486 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:36:20.466671 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:36:20.472346 update_engine[1425]: I20250213 15:36:20.472298 1425 update_check_scheduler.cc:74] Next update check in 10m56s Feb 13 15:36:20.474345 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:36:20.484450 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:36:20.497275 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:36:20.508048 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:36:20.508347 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:36:20.508347 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:36:20.508347 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:36:20.516187 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Feb 13 15:36:20.508597 systemd-logind[1423]: New seat seat0. Feb 13 15:36:20.509267 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:36:20.510488 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:36:20.513310 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:36:20.589736 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:36:20.675677 containerd[1445]: time="2025-02-13T15:36:20.675594360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:36:20.680847 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:36:20.684289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:36:20.685781 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:36:20.706749 containerd[1445]: time="2025-02-13T15:36:20.706703160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.712469 containerd[1445]: time="2025-02-13T15:36:20.712432600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:20.712469 containerd[1445]: time="2025-02-13T15:36:20.712469880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:36:20.712523 containerd[1445]: time="2025-02-13T15:36:20.712486720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:36:20.712666 containerd[1445]: time="2025-02-13T15:36:20.712644360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 15:36:20.712770 containerd[1445]: time="2025-02-13T15:36:20.712753520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.712835 containerd[1445]: time="2025-02-13T15:36:20.712817280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:20.712835 containerd[1445]: time="2025-02-13T15:36:20.712833280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713008 containerd[1445]: time="2025-02-13T15:36:20.712987280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713008 containerd[1445]: time="2025-02-13T15:36:20.713006760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713053 containerd[1445]: time="2025-02-13T15:36:20.713019880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713053 containerd[1445]: time="2025-02-13T15:36:20.713028680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713111 containerd[1445]: time="2025-02-13T15:36:20.713095240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713315 containerd[1445]: time="2025-02-13T15:36:20.713296000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713415 containerd[1445]: time="2025-02-13T15:36:20.713397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:20.713415 containerd[1445]: time="2025-02-13T15:36:20.713413320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:36:20.713501 containerd[1445]: time="2025-02-13T15:36:20.713485440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:36:20.713539 containerd[1445]: time="2025-02-13T15:36:20.713529720Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:36:20.720498 containerd[1445]: time="2025-02-13T15:36:20.720471280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:36:20.720547 containerd[1445]: time="2025-02-13T15:36:20.720530840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:36:20.720573 containerd[1445]: time="2025-02-13T15:36:20.720551960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:36:20.720573 containerd[1445]: time="2025-02-13T15:36:20.720568400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 15:36:20.720611 containerd[1445]: time="2025-02-13T15:36:20.720581480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:36:20.720750 containerd[1445]: time="2025-02-13T15:36:20.720729040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:36:20.720976 containerd[1445]: time="2025-02-13T15:36:20.720958800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:36:20.721077 containerd[1445]: time="2025-02-13T15:36:20.721060040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:36:20.721104 containerd[1445]: time="2025-02-13T15:36:20.721079240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:36:20.721104 containerd[1445]: time="2025-02-13T15:36:20.721093200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:36:20.721141 containerd[1445]: time="2025-02-13T15:36:20.721107000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721141 containerd[1445]: time="2025-02-13T15:36:20.721126720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721184 containerd[1445]: time="2025-02-13T15:36:20.721139480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721184 containerd[1445]: time="2025-02-13T15:36:20.721152720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721184 containerd[1445]: time="2025-02-13T15:36:20.721166640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721184 containerd[1445]: time="2025-02-13T15:36:20.721179040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721272 containerd[1445]: time="2025-02-13T15:36:20.721191280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721272 containerd[1445]: time="2025-02-13T15:36:20.721202360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:36:20.721272 containerd[1445]: time="2025-02-13T15:36:20.721223360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721272 containerd[1445]: time="2025-02-13T15:36:20.721237240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721272 containerd[1445]: time="2025-02-13T15:36:20.721265520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721278800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721290840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721307640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721319400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721332520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721345240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721359360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721378 containerd[1445]: time="2025-02-13T15:36:20.721371760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721384480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721397280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721411320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721432840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721445640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.721521 containerd[1445]: time="2025-02-13T15:36:20.721455960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721632600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721717280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721729640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721741360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721762960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721775240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721784560Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 15:36:20.722256 containerd[1445]: time="2025-02-13T15:36:20.721794200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:36:20.722422 containerd[1445]: time="2025-02-13T15:36:20.722116720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:36:20.722422 containerd[1445]: time="2025-02-13T15:36:20.722161400Z" level=info msg="Connect containerd service" Feb 13 15:36:20.722422 containerd[1445]: time="2025-02-13T15:36:20.722187600Z" level=info msg="using legacy CRI server" Feb 13 15:36:20.722422 containerd[1445]: time="2025-02-13T15:36:20.722194520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:36:20.722583 containerd[1445]: time="2025-02-13T15:36:20.722434800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:36:20.723062 containerd[1445]: time="2025-02-13T15:36:20.723033360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723260960Z" level=info msg="Start subscribing containerd event" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723319160Z" level=info msg="Start recovering state" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723387360Z" level=info msg="Start event monitor" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723399720Z" level=info msg="Start snapshots syncer" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723408560Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723416880Z" level=info msg="Start streaming server" Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723928040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.723978400Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:36:20.724248 containerd[1445]: time="2025-02-13T15:36:20.724034800Z" level=info msg="containerd successfully booted in 0.050843s" Feb 13 15:36:20.724170 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:36:20.831788 tar[1432]: linux-arm64/LICENSE Feb 13 15:36:20.831858 tar[1432]: linux-arm64/README.md Feb 13 15:36:20.847298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:36:21.736374 systemd-networkd[1375]: eth0: Gained IPv6LL Feb 13 15:36:21.742833 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:36:21.744158 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:36:21.757504 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:36:21.759605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:21.761361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:36:21.777274 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:36:21.779406 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:36:21.782589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:36:21.784295 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:36:21.850914 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:36:21.869209 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:36:21.883544 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:36:21.888239 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:36:21.888439 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:36:21.891345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:36:21.904646 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:36:21.907165 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:36:21.909012 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:36:21.910069 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:36:22.251857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:22.253073 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:36:22.253997 systemd[1]: Startup finished in 546ms (kernel) + 5.072s (initrd) + 3.505s (userspace) = 9.124s. Feb 13 15:36:22.255909 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:22.675156 kubelet[1527]: E0213 15:36:22.675013 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:22.677218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:22.677386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:26.193500 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:36:26.194747 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:60510.service - OpenSSH per-connection server daemon (10.0.0.1:60510). Feb 13 15:36:26.254160 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 60510 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:26.256289 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:26.271495 systemd-logind[1423]: New session 1 of user core. Feb 13 15:36:26.272565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:36:26.285536 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:36:26.296322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:36:26.298953 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:36:26.306366 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:36:26.391341 systemd[1546]: Queued start job for default target default.target. Feb 13 15:36:26.414325 systemd[1546]: Created slice app.slice - User Application Slice. Feb 13 15:36:26.414377 systemd[1546]: Reached target paths.target - Paths. Feb 13 15:36:26.414391 systemd[1546]: Reached target timers.target - Timers. Feb 13 15:36:26.415797 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:36:26.427356 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:36:26.427483 systemd[1546]: Reached target sockets.target - Sockets. Feb 13 15:36:26.427503 systemd[1546]: Reached target basic.target - Basic System. Feb 13 15:36:26.427544 systemd[1546]: Reached target default.target - Main User Target. Feb 13 15:36:26.427572 systemd[1546]: Startup finished in 115ms. Feb 13 15:36:26.427759 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:36:26.429219 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:36:26.513624 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:60524.service - OpenSSH per-connection server daemon (10.0.0.1:60524). 
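The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by `kubeadm init` or `kubeadm join`, so this failure (and the scheduled restarts that follow) is expected on a node that has not joined a cluster. A small sketch of the same precondition check, assuming only the path printed in the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		// Mirrors the fatal error in the log: the unit keeps failing until
		// kubeadm (or some other provisioner) writes this file.
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; safe to start")
}
```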
Feb 13 15:36:26.556835 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 60524 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:26.558300 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:26.562189 systemd-logind[1423]: New session 2 of user core. Feb 13 15:36:26.570432 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:36:26.621938 sshd[1559]: Connection closed by 10.0.0.1 port 60524 Feb 13 15:36:26.622612 sshd-session[1557]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:26.631853 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:60524.service: Deactivated successfully. Feb 13 15:36:26.633560 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:36:26.634889 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:36:26.643656 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532). Feb 13 15:36:26.644592 systemd-logind[1423]: Removed session 2. Feb 13 15:36:26.686058 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:26.687399 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:26.691875 systemd-logind[1423]: New session 3 of user core. Feb 13 15:36:26.697428 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:36:26.745551 sshd[1566]: Connection closed by 10.0.0.1 port 60532 Feb 13 15:36:26.745973 sshd-session[1564]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:26.755831 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:60532.service: Deactivated successfully. Feb 13 15:36:26.758700 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:36:26.760022 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:36:26.761475 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). Feb 13 15:36:26.762386 systemd-logind[1423]: Removed session 3. Feb 13 15:36:26.806774 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:26.808134 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:26.812326 systemd-logind[1423]: New session 4 of user core. Feb 13 15:36:26.826439 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:36:26.878327 sshd[1573]: Connection closed by 10.0.0.1 port 60544 Feb 13 15:36:26.878720 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:26.888799 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:60544.service: Deactivated successfully. Feb 13 15:36:26.892483 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:36:26.893846 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:36:26.901758 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556). Feb 13 15:36:26.902862 systemd-logind[1423]: Removed session 4. 
Feb 13 15:36:26.950435 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:26.951744 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:26.956450 systemd-logind[1423]: New session 5 of user core. Feb 13 15:36:26.966472 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:36:27.033328 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:36:27.036495 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:27.057278 sudo[1581]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:27.059924 sshd[1580]: Connection closed by 10.0.0.1 port 60556 Feb 13 15:36:27.060309 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:27.076056 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:60556.service: Deactivated successfully. Feb 13 15:36:27.078737 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:36:27.081169 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:36:27.099582 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:60568.service - OpenSSH per-connection server daemon (10.0.0.1:60568). Feb 13 15:36:27.100846 systemd-logind[1423]: Removed session 5. Feb 13 15:36:27.141328 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 60568 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:27.142593 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:27.146901 systemd-logind[1423]: New session 6 of user core. Feb 13 15:36:27.156455 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:36:27.212761 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:36:27.213760 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:27.217943 sudo[1590]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:27.224788 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:36:27.225105 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:27.251603 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:36:27.278690 augenrules[1612]: No rules Feb 13 15:36:27.279951 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:36:27.281321 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:36:27.282632 sudo[1589]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:27.284375 sshd[1588]: Connection closed by 10.0.0.1 port 60568 Feb 13 15:36:27.285272 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:27.296156 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:60568.service: Deactivated successfully. Feb 13 15:36:27.297814 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:36:27.299410 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:36:27.300853 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:60578.service - OpenSSH per-connection server daemon (10.0.0.1:60578). Feb 13 15:36:27.301948 systemd-logind[1423]: Removed session 6. 
Feb 13 15:36:27.346754 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 60578 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:27.347950 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:27.352036 systemd-logind[1423]: New session 7 of user core. Feb 13 15:36:27.363436 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:36:27.416657 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:36:27.417704 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:27.737631 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:36:27.737727 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:36:27.981896 dockerd[1644]: time="2025-02-13T15:36:27.981830577Z" level=info msg="Starting up" Feb 13 15:36:28.126752 dockerd[1644]: time="2025-02-13T15:36:28.126636774Z" level=info msg="Loading containers: start." Feb 13 15:36:28.271917 kernel: Initializing XFRM netlink socket Feb 13 15:36:28.348510 systemd-networkd[1375]: docker0: Link UP Feb 13 15:36:28.378859 dockerd[1644]: time="2025-02-13T15:36:28.378718312Z" level=info msg="Loading containers: done." Feb 13 15:36:28.391025 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1604342826-merged.mount: Deactivated successfully. Feb 13 15:36:28.391973 dockerd[1644]: time="2025-02-13T15:36:28.391917239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:36:28.392112 dockerd[1644]: time="2025-02-13T15:36:28.392026747Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:36:28.392166 dockerd[1644]: time="2025-02-13T15:36:28.392142512Z" level=info msg="Daemon has completed initialization" Feb 13 15:36:28.428112 dockerd[1644]: time="2025-02-13T15:36:28.428031866Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:36:28.428307 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:36:28.999527 containerd[1445]: time="2025-02-13T15:36:28.999478084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:36:29.789568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196942858.mount: Deactivated successfully. 
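dockerd finishes initialization and reports "API listen on /run/docker.sock". A quick way to confirm the daemon is answering on that socket is to issue a plain HTTP request over it; the sketch below uses only the standard library and the Docker Engine API's documented `/_ping` endpoint (the socket path is taken from the log, the rest is an assumption about a typical setup).

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	const socket = "/run/docker.sock"

	client := &http.Client{
		Transport: &http.Transport{
			// Route every request for the fake "unix" host through the daemon socket.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socket)
			},
		},
	}

	resp, err := client.Get("http://unix/_ping")
	if err != nil {
		log.Fatalf("daemon not reachable on %s: %v", socket, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET /_ping -> %s %s\n", resp.Status, body) // expect "200 OK" and body "OK"
}
```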
Feb 13 15:36:30.612300 containerd[1445]: time="2025-02-13T15:36:30.612228160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:30.612715 containerd[1445]: time="2025-02-13T15:36:30.612648954Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 15:36:30.613646 containerd[1445]: time="2025-02-13T15:36:30.613584547Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:30.616655 containerd[1445]: time="2025-02-13T15:36:30.616584732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:30.618108 containerd[1445]: time="2025-02-13T15:36:30.617994553Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.618466335s" Feb 13 15:36:30.618108 containerd[1445]: time="2025-02-13T15:36:30.618039927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:36:30.619028 containerd[1445]: time="2025-02-13T15:36:30.618732094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:36:31.781184 containerd[1445]: time="2025-02-13T15:36:31.781129096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:31.782446 containerd[1445]: time="2025-02-13T15:36:31.782396576Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 15:36:31.783101 containerd[1445]: time="2025-02-13T15:36:31.782961057Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:31.786106 containerd[1445]: time="2025-02-13T15:36:31.786031570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:31.787287 containerd[1445]: time="2025-02-13T15:36:31.787105578Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.168333936s" Feb 13 15:36:31.787287 containerd[1445]: time="2025-02-13T15:36:31.787146068Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:36:31.787624 
containerd[1445]: time="2025-02-13T15:36:31.787591462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:36:32.785507 containerd[1445]: time="2025-02-13T15:36:32.785440582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:32.786100 containerd[1445]: time="2025-02-13T15:36:32.786042915Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 15:36:32.787782 containerd[1445]: time="2025-02-13T15:36:32.787737410Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:32.790783 containerd[1445]: time="2025-02-13T15:36:32.790708870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:32.791798 containerd[1445]: time="2025-02-13T15:36:32.791756412Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.004128622s" Feb 13 15:36:32.791798 containerd[1445]: time="2025-02-13T15:36:32.791794351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:36:32.792310 containerd[1445]: time="2025-02-13T15:36:32.792284616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:36:32.927676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:36:32.940479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:33.036738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:33.041629 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:33.084035 kubelet[1911]: E0213 15:36:33.083977 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:33.086656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:33.086789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:33.784674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318956827.mount: Deactivated successfully. 
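Each image pull above ends with a "Pulled image ... in <duration>" entry, and the matching "stop pulling" entry reports how many bytes were read, so the effective pull rate can be sanity-checked from the log alone. A sketch using the kube-apiserver figures quoted earlier (25620377 bytes, with the PullImage and Pulled timestamps copied verbatim):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps of the PullImage request and the Pulled response for
	// kube-apiserver:v1.31.6, copied from the containerd log above.
	start, err := time.Parse(time.RFC3339Nano, "2025-02-13T15:36:28.999478084Z")
	if err != nil {
		log.Fatal(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2025-02-13T15:36:30.618039927Z")
	if err != nil {
		log.Fatal(err)
	}

	elapsed := end.Sub(start) // should land close to the reported 1.618466335s
	const bytesRead = 25620377.0
	fmt.Printf("pull took %s, ~%.1f MiB/s\n", elapsed, bytesRead/elapsed.Seconds()/(1<<20))
}
```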
Feb 13 15:36:34.161878 containerd[1445]: time="2025-02-13T15:36:34.161740007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:34.162283 containerd[1445]: time="2025-02-13T15:36:34.162143929Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 15:36:34.163091 containerd[1445]: time="2025-02-13T15:36:34.163060051Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:34.165555 containerd[1445]: time="2025-02-13T15:36:34.165516476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:34.166263 containerd[1445]: time="2025-02-13T15:36:34.166221737Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.37390418s" Feb 13 15:36:34.166301 containerd[1445]: time="2025-02-13T15:36:34.166263272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:36:34.166715 containerd[1445]: time="2025-02-13T15:36:34.166690291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:36:34.720266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897662407.mount: Deactivated successfully. 
Feb 13 15:36:35.262722 containerd[1445]: time="2025-02-13T15:36:35.262651037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.263429 containerd[1445]: time="2025-02-13T15:36:35.263372867Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:36:35.264837 containerd[1445]: time="2025-02-13T15:36:35.264793537Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.268504 containerd[1445]: time="2025-02-13T15:36:35.267893031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.269242 containerd[1445]: time="2025-02-13T15:36:35.269187711Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.102464521s" Feb 13 15:36:35.269242 containerd[1445]: time="2025-02-13T15:36:35.269227596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:36:35.269852 containerd[1445]: time="2025-02-13T15:36:35.269809610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:36:35.685744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088398550.mount: Deactivated successfully. 
Feb 13 15:36:35.690637 containerd[1445]: time="2025-02-13T15:36:35.690579199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.692115 containerd[1445]: time="2025-02-13T15:36:35.692054336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 15:36:35.693184 containerd[1445]: time="2025-02-13T15:36:35.693144453Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.695470 containerd[1445]: time="2025-02-13T15:36:35.695427844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:35.696405 containerd[1445]: time="2025-02-13T15:36:35.696365554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 426.515461ms" Feb 13 15:36:35.696452 containerd[1445]: time="2025-02-13T15:36:35.696409184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:36:35.697069 containerd[1445]: time="2025-02-13T15:36:35.697045308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:36:36.217651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267852441.mount: Deactivated successfully. Feb 13 15:36:37.557582 containerd[1445]: time="2025-02-13T15:36:37.557518235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:37.558081 containerd[1445]: time="2025-02-13T15:36:37.558030989Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 15:36:37.559106 containerd[1445]: time="2025-02-13T15:36:37.559059207Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:37.562499 containerd[1445]: time="2025-02-13T15:36:37.562457288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:37.563985 containerd[1445]: time="2025-02-13T15:36:37.563929186Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.8668502s" Feb 13 15:36:37.564081 containerd[1445]: time="2025-02-13T15:36:37.563987931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:36:42.442179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:42.450475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:42.471420 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Feb 13 15:36:42.471437 systemd[1]: Reloading... Feb 13 15:36:42.537273 zram_generator::config[2099]: No configuration found. Feb 13 15:36:42.644980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:36:42.697493 systemd[1]: Reloading finished in 225 ms. Feb 13 15:36:42.742520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:42.745605 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:36:42.745828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:42.747414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:42.838869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:42.844183 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:36:42.882743 kubelet[2145]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:42.882743 kubelet[2145]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:36:42.882743 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
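systemd rewrites the docker.socket ListenStream from /var/run/docker.sock to /run/docker.sock because /var/run is a legacy location; on most current systems (including this one, going by the warning) it is only a symlink to /run, so both paths name the same socket. A tiny check that makes the symlink visible, assuming nothing beyond the standard filesystem layout:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	fi, err := os.Lstat("/var/run")
	if err != nil {
		log.Fatal(err)
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		target, err := os.Readlink("/var/run")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("/var/run -> %s (same socket either way)\n", target)
	} else {
		fmt.Println("/var/run is a real directory on this system")
	}
}
```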
Feb 13 15:36:42.883279 kubelet[2145]: I0213 15:36:42.883211 2145 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:36:43.746346 kubelet[2145]: I0213 15:36:43.746300 2145 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:36:43.746346 kubelet[2145]: I0213 15:36:43.746335 2145 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:36:43.746594 kubelet[2145]: I0213 15:36:43.746568 2145 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:36:43.777492 kubelet[2145]: E0213 15:36:43.777456 2145 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:43.778668 kubelet[2145]: I0213 15:36:43.778614 2145 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:36:43.786709 kubelet[2145]: E0213 15:36:43.786668 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:36:43.786709 kubelet[2145]: I0213 15:36:43.786701 2145 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:36:43.790159 kubelet[2145]: I0213 15:36:43.790118 2145 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:36:43.790461 kubelet[2145]: I0213 15:36:43.790437 2145 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:36:43.790583 kubelet[2145]: I0213 15:36:43.790546 2145 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:36:43.790746 kubelet[2145]: I0213 15:36:43.790581 2145 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:36:43.790893 kubelet[2145]: I0213 15:36:43.790869 2145 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:36:43.790893 kubelet[2145]: I0213 15:36:43.790881 2145 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:36:43.791084 kubelet[2145]: I0213 15:36:43.791057 2145 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:43.792718 kubelet[2145]: I0213 15:36:43.792686 2145 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:36:43.792753 kubelet[2145]: I0213 15:36:43.792719 2145 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:36:43.792828 kubelet[2145]: I0213 15:36:43.792809 2145 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:36:43.792828 kubelet[2145]: I0213 15:36:43.792823 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:36:43.793883 kubelet[2145]: W0213 15:36:43.793779 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:43.793883 kubelet[2145]: W0213 15:36:43.793820 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: 
connection refused Feb 13 15:36:43.793883 kubelet[2145]: E0213 15:36:43.793853 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:43.793883 kubelet[2145]: E0213 15:36:43.793872 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:43.796640 kubelet[2145]: I0213 15:36:43.796605 2145 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:36:43.798410 kubelet[2145]: I0213 15:36:43.798384 2145 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:36:43.799081 kubelet[2145]: W0213 15:36:43.799051 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:36:43.800176 kubelet[2145]: I0213 15:36:43.799884 2145 server.go:1269] "Started kubelet" Feb 13 15:36:43.800784 kubelet[2145]: I0213 15:36:43.800730 2145 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:36:43.801646 kubelet[2145]: I0213 15:36:43.801059 2145 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:36:43.801646 kubelet[2145]: I0213 15:36:43.801085 2145 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:36:43.801646 kubelet[2145]: I0213 15:36:43.801342 2145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:36:43.802512 kubelet[2145]: I0213 15:36:43.802486 2145 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:36:43.802776 kubelet[2145]: I0213 15:36:43.802744 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:36:43.803969 kubelet[2145]: I0213 15:36:43.803859 2145 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:36:43.803969 kubelet[2145]: I0213 15:36:43.803961 2145 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:36:43.804069 kubelet[2145]: I0213 15:36:43.804016 2145 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:36:43.804368 kubelet[2145]: W0213 15:36:43.804327 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:43.804443 kubelet[2145]: E0213 15:36:43.804379 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:43.805313 kubelet[2145]: E0213 15:36:43.805119 2145 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" Feb 13 15:36:43.805313 kubelet[2145]: E0213 15:36:43.805287 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:43.805464 kubelet[2145]: E0213 15:36:43.803721 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce91202256c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:36:43.799844547 +0000 UTC m=+0.952297675,LastTimestamp:2025-02-13 15:36:43.799844547 +0000 UTC m=+0.952297675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:36:43.805825 kubelet[2145]: I0213 15:36:43.805588 2145 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:36:43.805989 kubelet[2145]: I0213 15:36:43.805968 2145 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:36:43.806343 kubelet[2145]: E0213 15:36:43.806285 2145 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:36:43.807839 kubelet[2145]: I0213 15:36:43.807813 2145 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:36:43.820475 kubelet[2145]: I0213 15:36:43.820326 2145 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:36:43.820475 kubelet[2145]: I0213 15:36:43.820350 2145 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:36:43.820475 kubelet[2145]: I0213 15:36:43.820332 2145 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:36:43.820646 kubelet[2145]: I0213 15:36:43.820526 2145 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:43.821504 kubelet[2145]: I0213 15:36:43.821468 2145 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:36:43.821504 kubelet[2145]: I0213 15:36:43.821494 2145 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:36:43.821943 kubelet[2145]: I0213 15:36:43.821513 2145 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:36:43.821943 kubelet[2145]: E0213 15:36:43.821559 2145 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:36:43.905414 kubelet[2145]: E0213 15:36:43.905373 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:43.906201 kubelet[2145]: W0213 15:36:43.906131 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:43.906201 kubelet[2145]: E0213 15:36:43.906175 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:43.908900 kubelet[2145]: I0213 15:36:43.908871 2145 policy_none.go:49] "None policy: Start" Feb 13 15:36:43.909659 kubelet[2145]: I0213 15:36:43.909597 2145 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:36:43.909659 kubelet[2145]: I0213 15:36:43.909625 2145 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:36:43.916831 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:36:43.921827 kubelet[2145]: E0213 15:36:43.921787 2145 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:36:43.927104 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:36:43.929710 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:36:43.937196 kubelet[2145]: I0213 15:36:43.937037 2145 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:36:43.937552 kubelet[2145]: I0213 15:36:43.937232 2145 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:36:43.937552 kubelet[2145]: I0213 15:36:43.937262 2145 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:36:43.937552 kubelet[2145]: I0213 15:36:43.937456 2145 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:36:43.939029 kubelet[2145]: E0213 15:36:43.938958 2145 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:36:44.007302 kubelet[2145]: E0213 15:36:44.006227 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Feb 13 15:36:44.038536 kubelet[2145]: I0213 15:36:44.038500 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:44.039124 kubelet[2145]: E0213 15:36:44.039083 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Feb 13 15:36:44.129909 systemd[1]: Created slice kubepods-burstable-pod39acaef37933d0af7e26f644287931a2.slice - libcontainer container kubepods-burstable-pod39acaef37933d0af7e26f644287931a2.slice. Feb 13 15:36:44.155677 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 15:36:44.160364 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
Feb 13 15:36:44.206265 kubelet[2145]: I0213 15:36:44.206217 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:44.206517 kubelet[2145]: I0213 15:36:44.206435 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:44.206517 kubelet[2145]: I0213 15:36:44.206466 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:44.206517 kubelet[2145]: I0213 15:36:44.206484 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:44.206517 kubelet[2145]: I0213 15:36:44.206501 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:44.206638 kubelet[2145]: I0213 15:36:44.206542 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:44.206638 kubelet[2145]: I0213 15:36:44.206586 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:44.206878 kubelet[2145]: I0213 15:36:44.206612 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:44.206918 kubelet[2145]: I0213 15:36:44.206883 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " 
pod="kube-system/kube-scheduler-localhost" Feb 13 15:36:44.240248 kubelet[2145]: I0213 15:36:44.240226 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:44.240631 kubelet[2145]: E0213 15:36:44.240577 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Feb 13 15:36:44.407028 kubelet[2145]: E0213 15:36:44.406920 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Feb 13 15:36:44.453356 kubelet[2145]: E0213 15:36:44.453308 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:44.454164 containerd[1445]: time="2025-02-13T15:36:44.454083107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39acaef37933d0af7e26f644287931a2,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:44.458099 kubelet[2145]: E0213 15:36:44.458077 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:44.458556 containerd[1445]: time="2025-02-13T15:36:44.458488844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:44.462861 kubelet[2145]: E0213 15:36:44.462837 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:44.463397 containerd[1445]: time="2025-02-13T15:36:44.463364873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:44.642088 kubelet[2145]: I0213 15:36:44.641913 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:44.642457 kubelet[2145]: E0213 15:36:44.642430 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Feb 13 15:36:44.877030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975922138.mount: Deactivated successfully. 
Feb 13 15:36:44.881088 containerd[1445]: time="2025-02-13T15:36:44.880959727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:44.882741 containerd[1445]: time="2025-02-13T15:36:44.882708565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:44.884000 containerd[1445]: time="2025-02-13T15:36:44.883961503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:36:44.884516 containerd[1445]: time="2025-02-13T15:36:44.884438186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:36:44.886713 containerd[1445]: time="2025-02-13T15:36:44.886676134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:44.887994 containerd[1445]: time="2025-02-13T15:36:44.887940099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:36:44.891070 containerd[1445]: time="2025-02-13T15:36:44.890958975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 432.396017ms" Feb 13 15:36:44.891538 containerd[1445]: time="2025-02-13T15:36:44.891489196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:44.893276 containerd[1445]: time="2025-02-13T15:36:44.893109425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 429.680028ms" Feb 13 15:36:44.893771 containerd[1445]: time="2025-02-13T15:36:44.893746042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:44.894924 containerd[1445]: time="2025-02-13T15:36:44.894710916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 440.5503ms" Feb 13 15:36:45.047464 containerd[1445]: time="2025-02-13T15:36:45.047317225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:45.047982 containerd[1445]: time="2025-02-13T15:36:45.047870021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:45.048075 containerd[1445]: time="2025-02-13T15:36:45.047971557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:45.048146 containerd[1445]: time="2025-02-13T15:36:45.048063703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.048494 containerd[1445]: time="2025-02-13T15:36:45.048451827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:45.048698 containerd[1445]: time="2025-02-13T15:36:45.048636678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.048828 containerd[1445]: time="2025-02-13T15:36:45.048791000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.049555 containerd[1445]: time="2025-02-13T15:36:45.049219922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.050147 containerd[1445]: time="2025-02-13T15:36:45.050063341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:45.052600 containerd[1445]: time="2025-02-13T15:36:45.051389946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:45.052600 containerd[1445]: time="2025-02-13T15:36:45.051436019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.052600 containerd[1445]: time="2025-02-13T15:36:45.051528005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:45.068504 systemd[1]: Started cri-containerd-3bb15579aee12693d2b0c34de41cd7c97e3d1aafd35fda72c03ecc7bfcae11c3.scope - libcontainer container 3bb15579aee12693d2b0c34de41cd7c97e3d1aafd35fda72c03ecc7bfcae11c3. Feb 13 15:36:45.069652 systemd[1]: Started cri-containerd-8fb2b8240e56e2a7eb72f25a53cff08635b8b432a997ce0526f56ad9b86c1e17.scope - libcontainer container 8fb2b8240e56e2a7eb72f25a53cff08635b8b432a997ce0526f56ad9b86c1e17. Feb 13 15:36:45.074119 systemd[1]: Started cri-containerd-7f9823bcfe4f63ab2e7153a84c1f762c80ef4c60049e31385a7d6311a3499fa2.scope - libcontainer container 7f9823bcfe4f63ab2e7153a84c1f762c80ef4c60049e31385a7d6311a3499fa2. 
Feb 13 15:36:45.105629 containerd[1445]: time="2025-02-13T15:36:45.105576213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39acaef37933d0af7e26f644287931a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb15579aee12693d2b0c34de41cd7c97e3d1aafd35fda72c03ecc7bfcae11c3\"" Feb 13 15:36:45.106968 kubelet[2145]: E0213 15:36:45.106921 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:45.108140 containerd[1445]: time="2025-02-13T15:36:45.108084052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fb2b8240e56e2a7eb72f25a53cff08635b8b432a997ce0526f56ad9b86c1e17\"" Feb 13 15:36:45.108796 kubelet[2145]: E0213 15:36:45.108690 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:45.109937 containerd[1445]: time="2025-02-13T15:36:45.109908349Z" level=info msg="CreateContainer within sandbox \"3bb15579aee12693d2b0c34de41cd7c97e3d1aafd35fda72c03ecc7bfcae11c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:36:45.110273 containerd[1445]: time="2025-02-13T15:36:45.110221030Z" level=info msg="CreateContainer within sandbox \"8fb2b8240e56e2a7eb72f25a53cff08635b8b432a997ce0526f56ad9b86c1e17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:36:45.113338 containerd[1445]: time="2025-02-13T15:36:45.113309516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9823bcfe4f63ab2e7153a84c1f762c80ef4c60049e31385a7d6311a3499fa2\"" Feb 13 15:36:45.113861 kubelet[2145]: E0213 15:36:45.113837 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:45.115551 containerd[1445]: time="2025-02-13T15:36:45.115522176Z" level=info msg="CreateContainer within sandbox \"7f9823bcfe4f63ab2e7153a84c1f762c80ef4c60049e31385a7d6311a3499fa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:36:45.135349 containerd[1445]: time="2025-02-13T15:36:45.135125318Z" level=info msg="CreateContainer within sandbox \"8fb2b8240e56e2a7eb72f25a53cff08635b8b432a997ce0526f56ad9b86c1e17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ee8adc27403bd0e9c0395b8fc534972479fa76b4aa13a5268bc009f150418e6\"" Feb 13 15:36:45.137921 containerd[1445]: time="2025-02-13T15:36:45.137876469Z" level=info msg="StartContainer for \"4ee8adc27403bd0e9c0395b8fc534972479fa76b4aa13a5268bc009f150418e6\"" Feb 13 15:36:45.140191 containerd[1445]: time="2025-02-13T15:36:45.140135522Z" level=info msg="CreateContainer within sandbox \"7f9823bcfe4f63ab2e7153a84c1f762c80ef4c60049e31385a7d6311a3499fa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"495b7371a90ea7fac2e086abff38a93c23b00dad8dae6fefc1006371c71d2042\"" Feb 13 15:36:45.140686 containerd[1445]: time="2025-02-13T15:36:45.140648997Z" level=info msg="StartContainer for \"495b7371a90ea7fac2e086abff38a93c23b00dad8dae6fefc1006371c71d2042\"" Feb 13 
15:36:45.141305 containerd[1445]: time="2025-02-13T15:36:45.141179616Z" level=info msg="CreateContainer within sandbox \"3bb15579aee12693d2b0c34de41cd7c97e3d1aafd35fda72c03ecc7bfcae11c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"827c910e875afbc6b662976bd82c90ac3b2b8d7d39865daf4aa5f75a47ee44f4\"" Feb 13 15:36:45.141715 kubelet[2145]: W0213 15:36:45.141632 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:45.141715 kubelet[2145]: E0213 15:36:45.141699 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:45.141715 kubelet[2145]: W0213 15:36:45.141696 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:45.141879 kubelet[2145]: E0213 15:36:45.141742 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:45.142790 containerd[1445]: time="2025-02-13T15:36:45.142005972Z" level=info msg="StartContainer for \"827c910e875afbc6b662976bd82c90ac3b2b8d7d39865daf4aa5f75a47ee44f4\"" Feb 13 15:36:45.172437 systemd[1]: Started cri-containerd-4ee8adc27403bd0e9c0395b8fc534972479fa76b4aa13a5268bc009f150418e6.scope - libcontainer container 4ee8adc27403bd0e9c0395b8fc534972479fa76b4aa13a5268bc009f150418e6. Feb 13 15:36:45.177050 systemd[1]: Started cri-containerd-495b7371a90ea7fac2e086abff38a93c23b00dad8dae6fefc1006371c71d2042.scope - libcontainer container 495b7371a90ea7fac2e086abff38a93c23b00dad8dae6fefc1006371c71d2042. Feb 13 15:36:45.178703 systemd[1]: Started cri-containerd-827c910e875afbc6b662976bd82c90ac3b2b8d7d39865daf4aa5f75a47ee44f4.scope - libcontainer container 827c910e875afbc6b662976bd82c90ac3b2b8d7d39865daf4aa5f75a47ee44f4. 
Feb 13 15:36:45.210287 kubelet[2145]: E0213 15:36:45.210200 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s" Feb 13 15:36:45.242879 containerd[1445]: time="2025-02-13T15:36:45.242828735Z" level=info msg="StartContainer for \"4ee8adc27403bd0e9c0395b8fc534972479fa76b4aa13a5268bc009f150418e6\" returns successfully" Feb 13 15:36:45.253277 containerd[1445]: time="2025-02-13T15:36:45.248504618Z" level=info msg="StartContainer for \"827c910e875afbc6b662976bd82c90ac3b2b8d7d39865daf4aa5f75a47ee44f4\" returns successfully" Feb 13 15:36:45.253277 containerd[1445]: time="2025-02-13T15:36:45.248576225Z" level=info msg="StartContainer for \"495b7371a90ea7fac2e086abff38a93c23b00dad8dae6fefc1006371c71d2042\" returns successfully" Feb 13 15:36:45.259776 kubelet[2145]: W0213 15:36:45.254948 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:45.259776 kubelet[2145]: E0213 15:36:45.255012 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:45.317468 kubelet[2145]: W0213 15:36:45.317395 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Feb 13 15:36:45.317468 kubelet[2145]: E0213 15:36:45.317469 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:45.444436 kubelet[2145]: I0213 15:36:45.444329 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:45.831902 kubelet[2145]: E0213 15:36:45.831797 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:45.832771 kubelet[2145]: E0213 15:36:45.832742 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:45.834440 kubelet[2145]: E0213 15:36:45.834416 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:46.836096 kubelet[2145]: E0213 15:36:46.836064 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:46.894321 kubelet[2145]: E0213 15:36:46.894277 2145 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:36:47.080607 kubelet[2145]: I0213 15:36:47.080563 2145 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:36:47.080607 kubelet[2145]: E0213 15:36:47.080608 2145 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 15:36:47.090553 kubelet[2145]: E0213 15:36:47.090455 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.190943 kubelet[2145]: E0213 15:36:47.190901 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.291988 kubelet[2145]: E0213 15:36:47.291953 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.392940 kubelet[2145]: E0213 15:36:47.392833 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.493616 kubelet[2145]: E0213 15:36:47.493566 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.594162 kubelet[2145]: E0213 15:36:47.594119 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.695297 kubelet[2145]: E0213 15:36:47.694646 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.794836 kubelet[2145]: E0213 15:36:47.794785 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:47.895676 kubelet[2145]: E0213 15:36:47.895638 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:48.796578 kubelet[2145]: I0213 15:36:48.796536 2145 apiserver.go:52] "Watching apiserver" Feb 13 15:36:48.801385 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)... Feb 13 15:36:48.801403 systemd[1]: Reloading... Feb 13 15:36:48.804965 kubelet[2145]: I0213 15:36:48.804921 2145 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:36:48.865271 zram_generator::config[2467]: No configuration found. Feb 13 15:36:48.950161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:36:49.016477 systemd[1]: Reloading finished in 214 ms. Feb 13 15:36:49.048043 kubelet[2145]: I0213 15:36:49.047643 2145 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:36:49.047776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:49.062623 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:36:49.062820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:49.062868 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 117.6M memory peak, 0B memory swap peak. Feb 13 15:36:49.071627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:49.163298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:49.168171 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:36:49.217434 kubelet[2508]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:49.217434 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:36:49.217434 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:49.217801 kubelet[2508]: I0213 15:36:49.217477 2508 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:36:49.225040 kubelet[2508]: I0213 15:36:49.223436 2508 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:36:49.225040 kubelet[2508]: I0213 15:36:49.223463 2508 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:36:49.225040 kubelet[2508]: I0213 15:36:49.223704 2508 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:36:49.225196 kubelet[2508]: I0213 15:36:49.225047 2508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:36:49.227301 kubelet[2508]: I0213 15:36:49.227176 2508 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:36:49.230430 kubelet[2508]: E0213 15:36:49.230401 2508 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:36:49.230430 kubelet[2508]: I0213 15:36:49.230429 2508 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:36:49.232671 kubelet[2508]: I0213 15:36:49.232646 2508 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:36:49.232794 kubelet[2508]: I0213 15:36:49.232771 2508 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:36:49.232916 kubelet[2508]: I0213 15:36:49.232883 2508 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:36:49.233069 kubelet[2508]: I0213 15:36:49.232908 2508 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:36:49.233146 kubelet[2508]: I0213 15:36:49.233074 2508 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:36:49.233146 kubelet[2508]: I0213 15:36:49.233084 2508 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:36:49.233146 kubelet[2508]: I0213 15:36:49.233112 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:49.233225 kubelet[2508]: I0213 15:36:49.233219 2508 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:36:49.233273 kubelet[2508]: I0213 15:36:49.233231 2508 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:36:49.233273 kubelet[2508]: I0213 15:36:49.233266 2508 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:36:49.233320 kubelet[2508]: I0213 15:36:49.233276 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:36:49.237261 kubelet[2508]: I0213 15:36:49.234599 2508 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:36:49.237261 kubelet[2508]: I0213 15:36:49.235104 2508 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:36:49.237261 kubelet[2508]: I0213 15:36:49.235558 2508 server.go:1269] "Started kubelet" Feb 13 15:36:49.237261 kubelet[2508]: I0213 15:36:49.236565 2508 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 
15:36:49.237261 kubelet[2508]: I0213 15:36:49.236782 2508 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:36:49.237805 kubelet[2508]: I0213 15:36:49.237446 2508 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:36:49.237980 kubelet[2508]: I0213 15:36:49.237954 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:36:49.239199 kubelet[2508]: I0213 15:36:49.239154 2508 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:36:49.245092 kubelet[2508]: E0213 15:36:49.245062 2508 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:36:49.246678 kubelet[2508]: I0213 15:36:49.246657 2508 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:36:49.246793 kubelet[2508]: I0213 15:36:49.246764 2508 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:36:49.246915 kubelet[2508]: I0213 15:36:49.246889 2508 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:36:49.247111 kubelet[2508]: I0213 15:36:49.246946 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:36:49.247234 kubelet[2508]: I0213 15:36:49.247216 2508 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:36:49.249853 kubelet[2508]: I0213 15:36:49.247855 2508 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:36:49.249853 kubelet[2508]: E0213 15:36:49.248852 2508 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:49.250126 kubelet[2508]: I0213 15:36:49.250111 2508 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:36:49.260101 kubelet[2508]: I0213 15:36:49.260062 2508 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:36:49.261000 kubelet[2508]: I0213 15:36:49.260976 2508 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:36:49.261000 kubelet[2508]: I0213 15:36:49.260997 2508 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:36:49.261073 kubelet[2508]: I0213 15:36:49.261012 2508 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:36:49.261073 kubelet[2508]: E0213 15:36:49.261050 2508 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:36:49.288351 kubelet[2508]: I0213 15:36:49.288314 2508 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:36:49.288351 kubelet[2508]: I0213 15:36:49.288332 2508 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:36:49.288351 kubelet[2508]: I0213 15:36:49.288353 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:49.288505 kubelet[2508]: I0213 15:36:49.288487 2508 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:36:49.288538 kubelet[2508]: I0213 15:36:49.288497 2508 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:36:49.288538 kubelet[2508]: I0213 15:36:49.288514 2508 policy_none.go:49] "None policy: Start" Feb 13 15:36:49.289086 kubelet[2508]: I0213 15:36:49.289045 2508 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:36:49.289086 kubelet[2508]: I0213 15:36:49.289071 2508 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:36:49.289235 kubelet[2508]: I0213 15:36:49.289219 2508 state_mem.go:75] "Updated machine memory state" Feb 13 15:36:49.292972 kubelet[2508]: I0213 15:36:49.292945 2508 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:36:49.293156 kubelet[2508]: I0213 15:36:49.293098 2508 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:36:49.293156 kubelet[2508]: I0213 15:36:49.293110 2508 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:36:49.293629 kubelet[2508]: I0213 15:36:49.293615 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:36:49.396968 kubelet[2508]: I0213 15:36:49.396879 2508 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:49.405978 kubelet[2508]: I0213 15:36:49.405949 2508 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 15:36:49.406103 kubelet[2508]: I0213 15:36:49.406028 2508 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:36:49.451616 kubelet[2508]: I0213 15:36:49.451574 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:49.451616 kubelet[2508]: I0213 15:36:49.451615 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:49.451816 kubelet[2508]: I0213 15:36:49.451642 2508 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:49.451816 kubelet[2508]: I0213 15:36:49.451663 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:49.451816 kubelet[2508]: I0213 15:36:49.451678 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:49.451816 kubelet[2508]: I0213 15:36:49.451702 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:49.451816 kubelet[2508]: I0213 15:36:49.451717 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39acaef37933d0af7e26f644287931a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39acaef37933d0af7e26f644287931a2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:49.451941 kubelet[2508]: I0213 15:36:49.451731 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:49.451941 kubelet[2508]: I0213 15:36:49.451746 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:36:49.674106 kubelet[2508]: E0213 15:36:49.674064 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:49.674444 kubelet[2508]: E0213 15:36:49.674392 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:49.674520 kubelet[2508]: E0213 15:36:49.674451 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:49.802844 sudo[2545]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:36:49.803137 sudo[2545]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:36:50.218347 sudo[2545]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:50.235709 kubelet[2508]: I0213 15:36:50.234257 2508 apiserver.go:52] "Watching apiserver" Feb 13 15:36:50.247874 kubelet[2508]: I0213 15:36:50.247818 2508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:36:50.274280 kubelet[2508]: E0213 15:36:50.273539 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:50.274280 kubelet[2508]: E0213 15:36:50.274064 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:50.278334 kubelet[2508]: E0213 15:36:50.278129 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:50.278334 kubelet[2508]: E0213 15:36:50.278271 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:50.338524 kubelet[2508]: I0213 15:36:50.338466 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.338437064 podStartE2EDuration="1.338437064s" podCreationTimestamp="2025-02-13 15:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:50.338124827 +0000 UTC m=+1.166940020" watchObservedRunningTime="2025-02-13 15:36:50.338437064 +0000 UTC m=+1.167252257" Feb 13 15:36:50.359374 kubelet[2508]: I0213 15:36:50.357369 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3573534010000001 podStartE2EDuration="1.357353401s" podCreationTimestamp="2025-02-13 15:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:50.345887004 +0000 UTC m=+1.174702197" watchObservedRunningTime="2025-02-13 15:36:50.357353401 +0000 UTC m=+1.186168594" Feb 13 15:36:50.359509 kubelet[2508]: I0213 15:36:50.359426 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.359411883 podStartE2EDuration="1.359411883s" podCreationTimestamp="2025-02-13 15:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:50.357294192 +0000 UTC m=+1.186109385" watchObservedRunningTime="2025-02-13 15:36:50.359411883 +0000 UTC m=+1.188227076" Feb 13 15:36:51.274764 kubelet[2508]: E0213 15:36:51.274734 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:51.979464 sudo[1623]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:51.980494 sshd[1622]: Connection closed by 10.0.0.1 port 60578 Feb 13 15:36:51.980965 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Feb 13 
15:36:51.984184 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:60578.service: Deactivated successfully. Feb 13 15:36:51.985807 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:36:51.985952 systemd[1]: session-7.scope: Consumed 7.293s CPU time, 155.6M memory peak, 0B memory swap peak. Feb 13 15:36:51.986382 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:36:51.987161 systemd-logind[1423]: Removed session 7. Feb 13 15:36:52.277332 kubelet[2508]: E0213 15:36:52.277206 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:52.928578 kubelet[2508]: E0213 15:36:52.928542 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:54.654543 kubelet[2508]: E0213 15:36:54.654447 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:56.157295 kubelet[2508]: I0213 15:36:56.157259 2508 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:36:56.157731 containerd[1445]: time="2025-02-13T15:36:56.157701699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:36:56.158661 kubelet[2508]: I0213 15:36:56.158136 2508 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:36:57.082711 systemd[1]: Created slice kubepods-besteffort-pod866ef13c_5d8c_49d4_889d_951c10186cde.slice - libcontainer container kubepods-besteffort-pod866ef13c_5d8c_49d4_889d_951c10186cde.slice. 
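
The pod_startup_latency_tracker entries a few lines up are plain timestamp arithmetic: with both pull timestamps at the zero value (the control-plane images were already present), podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp. A quick recomputation against the kube-apiserver-localhost entry, with the values copied from the log; a sketch of the arithmetic only, not the tracker's implementation.

    # podStartSLOduration for kube-apiserver-localhost, recomputed from the log above.
    from datetime import datetime, timezone

    created = datetime(2025, 2, 13, 15, 36, 49, tzinfo=timezone.utc)          # podCreationTimestamp
    running = datetime(2025, 2, 13, 15, 36, 50, 338437, tzinfo=timezone.utc)  # watchObservedRunningTime, ns truncated to us

    print((running - created).total_seconds())  # 1.338437, matching podStartSLOduration=1.338437064
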
Feb 13 15:36:57.102109 kubelet[2508]: I0213 15:36:57.101395 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hostproc\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102109 kubelet[2508]: I0213 15:36:57.102020 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-bpf-maps\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102109 kubelet[2508]: I0213 15:36:57.102092 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-kernel\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102109 kubelet[2508]: I0213 15:36:57.102114 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hubble-tls\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102452 kubelet[2508]: I0213 15:36:57.102133 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-cgroup\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102452 kubelet[2508]: I0213 15:36:57.102189 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-lib-modules\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102452 kubelet[2508]: I0213 15:36:57.102208 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-clustermesh-secrets\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102452 kubelet[2508]: I0213 15:36:57.102235 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/866ef13c-5d8c-49d4-889d-951c10186cde-kube-proxy\") pod \"kube-proxy-qr4ws\" (UID: \"866ef13c-5d8c-49d4-889d-951c10186cde\") " pod="kube-system/kube-proxy-qr4ws" Feb 13 15:36:57.102452 kubelet[2508]: I0213 15:36:57.102275 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqfq\" (UniqueName: \"kubernetes.io/projected/866ef13c-5d8c-49d4-889d-951c10186cde-kube-api-access-7cqfq\") pod \"kube-proxy-qr4ws\" (UID: \"866ef13c-5d8c-49d4-889d-951c10186cde\") " pod="kube-system/kube-proxy-qr4ws" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102294 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-xtables-lock\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102472 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-config-path\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102556 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzhs\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-kube-api-access-6hzhs\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102595 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866ef13c-5d8c-49d4-889d-951c10186cde-lib-modules\") pod \"kube-proxy-qr4ws\" (UID: \"866ef13c-5d8c-49d4-889d-951c10186cde\") " pod="kube-system/kube-proxy-qr4ws" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102628 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-etc-cni-netd\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.102888 kubelet[2508]: I0213 15:36:57.102646 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cni-path\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.103029 kubelet[2508]: I0213 15:36:57.102662 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866ef13c-5d8c-49d4-889d-951c10186cde-xtables-lock\") pod \"kube-proxy-qr4ws\" (UID: \"866ef13c-5d8c-49d4-889d-951c10186cde\") " pod="kube-system/kube-proxy-qr4ws" Feb 13 15:36:57.103029 kubelet[2508]: I0213 15:36:57.102676 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-run\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.103029 kubelet[2508]: I0213 15:36:57.102691 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-net\") pod \"cilium-vkt8l\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " pod="kube-system/cilium-vkt8l" Feb 13 15:36:57.105171 systemd[1]: Created slice kubepods-burstable-podf2e2d9bf_64f3_4fb1_9bb4_a66b321d082b.slice - libcontainer container kubepods-burstable-podf2e2d9bf_64f3_4fb1_9bb4_a66b321d082b.slice. 
Feb 13 15:36:57.373945 systemd[1]: Created slice kubepods-besteffort-pod9f6d5097_9e47_4b49_bddd_b48310d8ef4e.slice - libcontainer container kubepods-besteffort-pod9f6d5097_9e47_4b49_bddd_b48310d8ef4e.slice. Feb 13 15:36:57.402642 kubelet[2508]: E0213 15:36:57.402600 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.403214 containerd[1445]: time="2025-02-13T15:36:57.403172249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qr4ws,Uid:866ef13c-5d8c-49d4-889d-951c10186cde,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:57.404737 kubelet[2508]: I0213 15:36:57.404651 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snh9r\" (UniqueName: \"kubernetes.io/projected/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-kube-api-access-snh9r\") pod \"cilium-operator-5d85765b45-v9rmr\" (UID: \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\") " pod="kube-system/cilium-operator-5d85765b45-v9rmr" Feb 13 15:36:57.404737 kubelet[2508]: I0213 15:36:57.404694 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-cilium-config-path\") pod \"cilium-operator-5d85765b45-v9rmr\" (UID: \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\") " pod="kube-system/cilium-operator-5d85765b45-v9rmr" Feb 13 15:36:57.408508 kubelet[2508]: E0213 15:36:57.408480 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.410466 containerd[1445]: time="2025-02-13T15:36:57.410409456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkt8l,Uid:f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:57.430684 containerd[1445]: time="2025-02-13T15:36:57.430605923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:57.430806 containerd[1445]: time="2025-02-13T15:36:57.430666843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:57.430806 containerd[1445]: time="2025-02-13T15:36:57.430684203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.430806 containerd[1445]: time="2025-02-13T15:36:57.430762082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.434627 containerd[1445]: time="2025-02-13T15:36:57.434438385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:57.434627 containerd[1445]: time="2025-02-13T15:36:57.434495425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:57.434627 containerd[1445]: time="2025-02-13T15:36:57.434510585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.434890 containerd[1445]: time="2025-02-13T15:36:57.434792184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.449502 systemd[1]: Started cri-containerd-b837cc7b9e797ef5e2e9bb7fc08d4edd77f837837239b566e7af99b9f947791f.scope - libcontainer container b837cc7b9e797ef5e2e9bb7fc08d4edd77f837837239b566e7af99b9f947791f. Feb 13 15:36:57.452659 systemd[1]: Started cri-containerd-f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356.scope - libcontainer container f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356. Feb 13 15:36:57.474762 containerd[1445]: time="2025-02-13T15:36:57.474726119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qr4ws,Uid:866ef13c-5d8c-49d4-889d-951c10186cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"b837cc7b9e797ef5e2e9bb7fc08d4edd77f837837239b566e7af99b9f947791f\"" Feb 13 15:36:57.475453 kubelet[2508]: E0213 15:36:57.475433 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.478902 containerd[1445]: time="2025-02-13T15:36:57.478461022Z" level=info msg="CreateContainer within sandbox \"b837cc7b9e797ef5e2e9bb7fc08d4edd77f837837239b566e7af99b9f947791f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:36:57.479332 containerd[1445]: time="2025-02-13T15:36:57.479290138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkt8l,Uid:f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\"" Feb 13 15:36:57.480772 kubelet[2508]: E0213 15:36:57.480745 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.483953 containerd[1445]: time="2025-02-13T15:36:57.483871997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:36:57.498048 containerd[1445]: time="2025-02-13T15:36:57.497995172Z" level=info msg="CreateContainer within sandbox \"b837cc7b9e797ef5e2e9bb7fc08d4edd77f837837239b566e7af99b9f947791f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05672fc6c68ddc9307fdb14a9f1615db4d7d8d4b555a46f7ab65c27735a205ef\"" Feb 13 15:36:57.499427 containerd[1445]: time="2025-02-13T15:36:57.499383486Z" level=info msg="StartContainer for \"05672fc6c68ddc9307fdb14a9f1615db4d7d8d4b555a46f7ab65c27735a205ef\"" Feb 13 15:36:57.529427 systemd[1]: Started cri-containerd-05672fc6c68ddc9307fdb14a9f1615db4d7d8d4b555a46f7ab65c27735a205ef.scope - libcontainer container 05672fc6c68ddc9307fdb14a9f1615db4d7d8d4b555a46f7ab65c27735a205ef. 
Feb 13 15:36:57.555955 containerd[1445]: time="2025-02-13T15:36:57.555909185Z" level=info msg="StartContainer for \"05672fc6c68ddc9307fdb14a9f1615db4d7d8d4b555a46f7ab65c27735a205ef\" returns successfully" Feb 13 15:36:57.679399 kubelet[2508]: E0213 15:36:57.678014 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:57.679543 containerd[1445]: time="2025-02-13T15:36:57.678872217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v9rmr,Uid:9f6d5097-9e47-4b49-bddd-b48310d8ef4e,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:57.709972 containerd[1445]: time="2025-02-13T15:36:57.709877474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:57.709972 containerd[1445]: time="2025-02-13T15:36:57.709931954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:57.709972 containerd[1445]: time="2025-02-13T15:36:57.709943834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.710204 containerd[1445]: time="2025-02-13T15:36:57.710021874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:57.730469 systemd[1]: Started cri-containerd-5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d.scope - libcontainer container 5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d. Feb 13 15:36:57.761211 containerd[1445]: time="2025-02-13T15:36:57.761172798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v9rmr,Uid:9f6d5097-9e47-4b49-bddd-b48310d8ef4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\"" Feb 13 15:36:57.761938 kubelet[2508]: E0213 15:36:57.761913 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:58.289190 kubelet[2508]: E0213 15:36:58.289156 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:58.298016 kubelet[2508]: I0213 15:36:58.297945 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qr4ws" podStartSLOduration=1.297916195 podStartE2EDuration="1.297916195s" podCreationTimestamp="2025-02-13 15:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:58.297549917 +0000 UTC m=+9.126365110" watchObservedRunningTime="2025-02-13 15:36:58.297916195 +0000 UTC m=+9.126731388" Feb 13 15:37:01.958046 kubelet[2508]: E0213 15:37:01.957846 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:02.888831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301964954.mount: Deactivated successfully. 
Feb 13 15:37:03.082159 kubelet[2508]: E0213 15:37:03.082108 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:04.220191 containerd[1445]: time="2025-02-13T15:37:04.219700496Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:04.220708 containerd[1445]: time="2025-02-13T15:37:04.220271654Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:37:04.221533 containerd[1445]: time="2025-02-13T15:37:04.221492130Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:04.224151 containerd[1445]: time="2025-02-13T15:37:04.224112082Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.740156685s" Feb 13 15:37:04.224219 containerd[1445]: time="2025-02-13T15:37:04.224151602Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:37:04.228194 containerd[1445]: time="2025-02-13T15:37:04.227993270Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:37:04.232534 containerd[1445]: time="2025-02-13T15:37:04.231283139Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:37:04.244865 containerd[1445]: time="2025-02-13T15:37:04.244802016Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\"" Feb 13 15:37:04.245175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198108256.mount: Deactivated successfully. Feb 13 15:37:04.246057 containerd[1445]: time="2025-02-13T15:37:04.245522894Z" level=info msg="StartContainer for \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\"" Feb 13 15:37:04.271435 systemd[1]: Started cri-containerd-d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b.scope - libcontainer container d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b. 
Feb 13 15:37:04.293855 containerd[1445]: time="2025-02-13T15:37:04.292171066Z" level=info msg="StartContainer for \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\" returns successfully" Feb 13 15:37:04.304844 kubelet[2508]: E0213 15:37:04.304515 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:04.338448 systemd[1]: cri-containerd-d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b.scope: Deactivated successfully. Feb 13 15:37:04.420437 containerd[1445]: time="2025-02-13T15:37:04.409695134Z" level=info msg="shim disconnected" id=d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b namespace=k8s.io Feb 13 15:37:04.420437 containerd[1445]: time="2025-02-13T15:37:04.420424100Z" level=warning msg="cleaning up after shim disconnected" id=d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b namespace=k8s.io Feb 13 15:37:04.420437 containerd[1445]: time="2025-02-13T15:37:04.420442420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:04.668995 kubelet[2508]: E0213 15:37:04.668957 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:05.243144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b-rootfs.mount: Deactivated successfully. Feb 13 15:37:05.308485 kubelet[2508]: E0213 15:37:05.308444 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:05.311029 containerd[1445]: time="2025-02-13T15:37:05.310990885Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:37:05.327381 containerd[1445]: time="2025-02-13T15:37:05.327262116Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\"" Feb 13 15:37:05.328645 containerd[1445]: time="2025-02-13T15:37:05.328615792Z" level=info msg="StartContainer for \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\"" Feb 13 15:37:05.357426 systemd[1]: Started cri-containerd-0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079.scope - libcontainer container 0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079. Feb 13 15:37:05.378545 containerd[1445]: time="2025-02-13T15:37:05.378497882Z" level=info msg="StartContainer for \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\" returns successfully" Feb 13 15:37:05.406037 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:37:05.406279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:37:05.406348 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:37:05.413615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:37:05.413841 systemd[1]: cri-containerd-0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079.scope: Deactivated successfully. 
Feb 13 15:37:05.444588 containerd[1445]: time="2025-02-13T15:37:05.444477883Z" level=info msg="shim disconnected" id=0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079 namespace=k8s.io Feb 13 15:37:05.444588 containerd[1445]: time="2025-02-13T15:37:05.444528283Z" level=warning msg="cleaning up after shim disconnected" id=0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079 namespace=k8s.io Feb 13 15:37:05.444588 containerd[1445]: time="2025-02-13T15:37:05.444536163Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:05.449268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:37:06.130795 update_engine[1425]: I20250213 15:37:06.130734 1425 update_attempter.cc:509] Updating boot flags... Feb 13 15:37:06.150270 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3050) Feb 13 15:37:06.179320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3050) Feb 13 15:37:06.242798 systemd[1]: run-containerd-runc-k8s.io-0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079-runc.eXHC7A.mount: Deactivated successfully. Feb 13 15:37:06.242900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079-rootfs.mount: Deactivated successfully. Feb 13 15:37:06.311719 kubelet[2508]: E0213 15:37:06.311687 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:06.313847 containerd[1445]: time="2025-02-13T15:37:06.313788151Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:37:06.333517 containerd[1445]: time="2025-02-13T15:37:06.333470734Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\"" Feb 13 15:37:06.334001 containerd[1445]: time="2025-02-13T15:37:06.333979573Z" level=info msg="StartContainer for \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\"" Feb 13 15:37:06.369401 systemd[1]: Started cri-containerd-4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258.scope - libcontainer container 4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258. Feb 13 15:37:06.399604 containerd[1445]: time="2025-02-13T15:37:06.399345546Z" level=info msg="StartContainer for \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\" returns successfully" Feb 13 15:37:06.402431 systemd[1]: cri-containerd-4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258.scope: Deactivated successfully. 
Feb 13 15:37:06.423117 containerd[1445]: time="2025-02-13T15:37:06.423051478Z" level=info msg="shim disconnected" id=4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258 namespace=k8s.io Feb 13 15:37:06.423117 containerd[1445]: time="2025-02-13T15:37:06.423104438Z" level=warning msg="cleaning up after shim disconnected" id=4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258 namespace=k8s.io Feb 13 15:37:06.423117 containerd[1445]: time="2025-02-13T15:37:06.423116957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:07.242939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258-rootfs.mount: Deactivated successfully. Feb 13 15:37:07.315721 kubelet[2508]: E0213 15:37:07.315679 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:07.319034 containerd[1445]: time="2025-02-13T15:37:07.318995956Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:37:07.331976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113612775.mount: Deactivated successfully. Feb 13 15:37:07.334952 containerd[1445]: time="2025-02-13T15:37:07.334836153Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\"" Feb 13 15:37:07.335497 containerd[1445]: time="2025-02-13T15:37:07.335426071Z" level=info msg="StartContainer for \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\"" Feb 13 15:37:07.366419 systemd[1]: Started cri-containerd-a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0.scope - libcontainer container a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0. Feb 13 15:37:07.385712 systemd[1]: cri-containerd-a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0.scope: Deactivated successfully. Feb 13 15:37:07.388021 containerd[1445]: time="2025-02-13T15:37:07.387698728Z" level=info msg="StartContainer for \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\" returns successfully" Feb 13 15:37:07.407296 containerd[1445]: time="2025-02-13T15:37:07.407065276Z" level=info msg="shim disconnected" id=a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0 namespace=k8s.io Feb 13 15:37:07.407296 containerd[1445]: time="2025-02-13T15:37:07.407134275Z" level=warning msg="cleaning up after shim disconnected" id=a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0 namespace=k8s.io Feb 13 15:37:07.407296 containerd[1445]: time="2025-02-13T15:37:07.407142595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:08.243047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:08.318834 kubelet[2508]: E0213 15:37:08.318778 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.323520 containerd[1445]: time="2025-02-13T15:37:08.323449419Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:37:08.356272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748121374.mount: Deactivated successfully. Feb 13 15:37:08.356963 containerd[1445]: time="2025-02-13T15:37:08.356673333Z" level=info msg="CreateContainer within sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\"" Feb 13 15:37:08.357618 containerd[1445]: time="2025-02-13T15:37:08.357383011Z" level=info msg="StartContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\"" Feb 13 15:37:08.392485 systemd[1]: Started cri-containerd-c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027.scope - libcontainer container c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027. Feb 13 15:37:08.419856 containerd[1445]: time="2025-02-13T15:37:08.419738449Z" level=info msg="StartContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" returns successfully" Feb 13 15:37:08.571084 kubelet[2508]: I0213 15:37:08.570808 2508 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:37:08.615934 systemd[1]: Created slice kubepods-burstable-pod7114726e_9b81_43c3_8496_04721a458b1c.slice - libcontainer container kubepods-burstable-pod7114726e_9b81_43c3_8496_04721a458b1c.slice. Feb 13 15:37:08.620473 systemd[1]: Created slice kubepods-burstable-podbc0fc116_115b_4e2e_8f8c_49cac06e78ef.slice - libcontainer container kubepods-burstable-podbc0fc116_115b_4e2e_8f8c_49cac06e78ef.slice. 
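The "Created slice" entries above show how kubelet, running with the systemd cgroup driver, derives a slice name from the pod's QoS class and UID: the dashes in the UID are replaced with underscores under the kubepods-burstable-pod prefix, which is why the same coredns pod appears with dashes in the volume entries below and with underscores in the slice name. The sketch below is an illustrative reconstruction of that mapping, not kubelet's actual code, using the coredns pod UID from the log.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reconstructs the burstable-QoS slice naming visible in the log:
// dashes in the pod UID become underscores under the kubepods-burstable-pod
// prefix. Illustrative only; kubelet's real naming logic is more general.
func podSliceName(uid string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// UID taken from the coredns-6f6b679f8f-f5lf5 entries in the log.
	fmt.Println(podSliceName("7114726e-9b81-43c3-8496-04721a458b1c"))
	// -> kubepods-burstable-pod7114726e_9b81_43c3_8496_04721a458b1c.slice
}
```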
Feb 13 15:37:08.787450 kubelet[2508]: I0213 15:37:08.787392 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7114726e-9b81-43c3-8496-04721a458b1c-config-volume\") pod \"coredns-6f6b679f8f-f5lf5\" (UID: \"7114726e-9b81-43c3-8496-04721a458b1c\") " pod="kube-system/coredns-6f6b679f8f-f5lf5" Feb 13 15:37:08.787450 kubelet[2508]: I0213 15:37:08.787447 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc0fc116-115b-4e2e-8f8c-49cac06e78ef-config-volume\") pod \"coredns-6f6b679f8f-426nw\" (UID: \"bc0fc116-115b-4e2e-8f8c-49cac06e78ef\") " pod="kube-system/coredns-6f6b679f8f-426nw" Feb 13 15:37:08.787615 kubelet[2508]: I0213 15:37:08.787466 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hxw\" (UniqueName: \"kubernetes.io/projected/bc0fc116-115b-4e2e-8f8c-49cac06e78ef-kube-api-access-d4hxw\") pod \"coredns-6f6b679f8f-426nw\" (UID: \"bc0fc116-115b-4e2e-8f8c-49cac06e78ef\") " pod="kube-system/coredns-6f6b679f8f-426nw" Feb 13 15:37:08.787615 kubelet[2508]: I0213 15:37:08.787488 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrzl\" (UniqueName: \"kubernetes.io/projected/7114726e-9b81-43c3-8496-04721a458b1c-kube-api-access-nbrzl\") pod \"coredns-6f6b679f8f-f5lf5\" (UID: \"7114726e-9b81-43c3-8496-04721a458b1c\") " pod="kube-system/coredns-6f6b679f8f-f5lf5" Feb 13 15:37:08.918909 kubelet[2508]: E0213 15:37:08.918864 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.920262 containerd[1445]: time="2025-02-13T15:37:08.919838992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5lf5,Uid:7114726e-9b81-43c3-8496-04721a458b1c,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:08.924795 kubelet[2508]: E0213 15:37:08.924312 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:08.929784 containerd[1445]: time="2025-02-13T15:37:08.929579046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-426nw,Uid:bc0fc116-115b-4e2e-8f8c-49cac06e78ef,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:09.323150 kubelet[2508]: E0213 15:37:09.323029 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:09.339910 kubelet[2508]: I0213 15:37:09.339521 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vkt8l" podStartSLOduration=5.594272236 podStartE2EDuration="12.339295264s" podCreationTimestamp="2025-02-13 15:36:57 +0000 UTC" firstStartedPulling="2025-02-13 15:36:57.482608923 +0000 UTC m=+8.311424116" lastFinishedPulling="2025-02-13 15:37:04.227631951 +0000 UTC m=+15.056447144" observedRunningTime="2025-02-13 15:37:09.33715543 +0000 UTC m=+20.165970623" watchObservedRunningTime="2025-02-13 15:37:09.339295264 +0000 UTC m=+20.168110457" Feb 13 15:37:10.283016 containerd[1445]: time="2025-02-13T15:37:10.282962283Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:10.283525 containerd[1445]: time="2025-02-13T15:37:10.283481161Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:37:10.284039 containerd[1445]: time="2025-02-13T15:37:10.284009200Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:10.285406 containerd[1445]: time="2025-02-13T15:37:10.285376277Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.057345687s" Feb 13 15:37:10.285464 containerd[1445]: time="2025-02-13T15:37:10.285407637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:37:10.288734 containerd[1445]: time="2025-02-13T15:37:10.288616709Z" level=info msg="CreateContainer within sandbox \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:37:10.300329 containerd[1445]: time="2025-02-13T15:37:10.300293522Z" level=info msg="CreateContainer within sandbox \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\"" Feb 13 15:37:10.300877 containerd[1445]: time="2025-02-13T15:37:10.300675121Z" level=info msg="StartContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\"" Feb 13 15:37:10.325390 systemd[1]: Started cri-containerd-1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3.scope - libcontainer container 1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3. 
Feb 13 15:37:10.326227 kubelet[2508]: E0213 15:37:10.326190 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:10.344900 containerd[1445]: time="2025-02-13T15:37:10.344779257Z" level=info msg="StartContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" returns successfully" Feb 13 15:37:11.328694 kubelet[2508]: E0213 15:37:11.328657 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:11.328694 kubelet[2508]: E0213 15:37:11.328846 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:11.338307 kubelet[2508]: I0213 15:37:11.338238 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v9rmr" podStartSLOduration=1.8152436220000001 podStartE2EDuration="14.33822359s" podCreationTimestamp="2025-02-13 15:36:57 +0000 UTC" firstStartedPulling="2025-02-13 15:36:57.763682586 +0000 UTC m=+8.592497779" lastFinishedPulling="2025-02-13 15:37:10.286662554 +0000 UTC m=+21.115477747" observedRunningTime="2025-02-13 15:37:11.336880993 +0000 UTC m=+22.165696186" watchObservedRunningTime="2025-02-13 15:37:11.33822359 +0000 UTC m=+22.167038783" Feb 13 15:37:12.334409 kubelet[2508]: E0213 15:37:12.334378 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:14.395222 systemd-networkd[1375]: cilium_host: Link UP Feb 13 15:37:14.395855 systemd-networkd[1375]: cilium_net: Link UP Feb 13 15:37:14.396471 systemd-networkd[1375]: cilium_net: Gained carrier Feb 13 15:37:14.396880 systemd-networkd[1375]: cilium_host: Gained carrier Feb 13 15:37:14.397332 systemd-networkd[1375]: cilium_net: Gained IPv6LL Feb 13 15:37:14.397822 systemd-networkd[1375]: cilium_host: Gained IPv6LL Feb 13 15:37:14.483612 systemd-networkd[1375]: cilium_vxlan: Link UP Feb 13 15:37:14.483619 systemd-networkd[1375]: cilium_vxlan: Gained carrier Feb 13 15:37:14.808282 kernel: NET: Registered PF_ALG protocol family Feb 13 15:37:15.118655 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:53536.service - OpenSSH per-connection server daemon (10.0.0.1:53536). Feb 13 15:37:15.174076 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 53536 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:15.177103 sshd-session[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:15.181056 systemd-logind[1423]: New session 8 of user core. Feb 13 15:37:15.188420 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:37:15.322369 sshd[3580]: Connection closed by 10.0.0.1 port 53536 Feb 13 15:37:15.323038 sshd-session[3566]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:15.326176 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:53536.service: Deactivated successfully. Feb 13 15:37:15.328724 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:37:15.330716 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:37:15.332072 systemd-logind[1423]: Removed session 8. 
Feb 13 15:37:15.442213 systemd-networkd[1375]: lxc_health: Link UP Feb 13 15:37:15.457134 systemd-networkd[1375]: lxc_health: Gained carrier Feb 13 15:37:15.624683 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Feb 13 15:37:16.029510 systemd-networkd[1375]: lxcef96f3bcdacd: Link UP Feb 13 15:37:16.046275 kernel: eth0: renamed from tmp020a6 Feb 13 15:37:16.054289 kernel: eth0: renamed from tmp53f76 Feb 13 15:37:16.057719 systemd-networkd[1375]: lxcbc0ad737042c: Link UP Feb 13 15:37:16.059181 systemd-networkd[1375]: lxcbc0ad737042c: Gained carrier Feb 13 15:37:16.059453 systemd-networkd[1375]: lxcef96f3bcdacd: Gained carrier Feb 13 15:37:17.032437 systemd-networkd[1375]: lxc_health: Gained IPv6LL Feb 13 15:37:17.417353 systemd-networkd[1375]: lxcbc0ad737042c: Gained IPv6LL Feb 13 15:37:17.423166 kubelet[2508]: E0213 15:37:17.423141 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:18.056374 systemd-networkd[1375]: lxcef96f3bcdacd: Gained IPv6LL Feb 13 15:37:19.585205 containerd[1445]: time="2025-02-13T15:37:19.585086890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:19.585205 containerd[1445]: time="2025-02-13T15:37:19.585160530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:19.585205 containerd[1445]: time="2025-02-13T15:37:19.585177770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:19.585906 containerd[1445]: time="2025-02-13T15:37:19.585422930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:19.593912 containerd[1445]: time="2025-02-13T15:37:19.593511477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:19.593912 containerd[1445]: time="2025-02-13T15:37:19.593578477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:19.593912 containerd[1445]: time="2025-02-13T15:37:19.593594757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:19.593912 containerd[1445]: time="2025-02-13T15:37:19.593661157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:19.608613 systemd[1]: Started cri-containerd-53f762a51db4708e1c416382c55c2bd6ee3c7337b15f4aa2c42a9024d995d02e.scope - libcontainer container 53f762a51db4708e1c416382c55c2bd6ee3c7337b15f4aa2c42a9024d995d02e. Feb 13 15:37:19.613688 systemd[1]: Started cri-containerd-020a67d57f1c96e8ad8b2276b2eb7aa70909978f91e83dc7e72dc3266381840e.scope - libcontainer container 020a67d57f1c96e8ad8b2276b2eb7aa70909978f91e83dc7e72dc3266381840e. 
Feb 13 15:37:19.620887 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:37:19.627487 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:37:19.643628 containerd[1445]: time="2025-02-13T15:37:19.643592117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-426nw,Uid:bc0fc116-115b-4e2e-8f8c-49cac06e78ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"53f762a51db4708e1c416382c55c2bd6ee3c7337b15f4aa2c42a9024d995d02e\"" Feb 13 15:37:19.644209 kubelet[2508]: E0213 15:37:19.644182 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:19.645931 containerd[1445]: time="2025-02-13T15:37:19.645868073Z" level=info msg="CreateContainer within sandbox \"53f762a51db4708e1c416382c55c2bd6ee3c7337b15f4aa2c42a9024d995d02e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:37:19.653813 containerd[1445]: time="2025-02-13T15:37:19.653778620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f5lf5,Uid:7114726e-9b81-43c3-8496-04721a458b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"020a67d57f1c96e8ad8b2276b2eb7aa70909978f91e83dc7e72dc3266381840e\"" Feb 13 15:37:19.654521 kubelet[2508]: E0213 15:37:19.654498 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:19.656476 containerd[1445]: time="2025-02-13T15:37:19.656352696Z" level=info msg="CreateContainer within sandbox \"020a67d57f1c96e8ad8b2276b2eb7aa70909978f91e83dc7e72dc3266381840e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:37:19.666908 containerd[1445]: time="2025-02-13T15:37:19.666695359Z" level=info msg="CreateContainer within sandbox \"53f762a51db4708e1c416382c55c2bd6ee3c7337b15f4aa2c42a9024d995d02e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f6c5a5be02edd088378990c2f790ab79e3167106132a22c772c020559b87a18\"" Feb 13 15:37:19.667284 containerd[1445]: time="2025-02-13T15:37:19.667123959Z" level=info msg="StartContainer for \"4f6c5a5be02edd088378990c2f790ab79e3167106132a22c772c020559b87a18\"" Feb 13 15:37:19.673901 containerd[1445]: time="2025-02-13T15:37:19.673850268Z" level=info msg="CreateContainer within sandbox \"020a67d57f1c96e8ad8b2276b2eb7aa70909978f91e83dc7e72dc3266381840e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ac01350e17fa48ea2bf80729522f7ed2902aca363abc9014a6d3c4362972320\"" Feb 13 15:37:19.675391 containerd[1445]: time="2025-02-13T15:37:19.674500947Z" level=info msg="StartContainer for \"3ac01350e17fa48ea2bf80729522f7ed2902aca363abc9014a6d3c4362972320\"" Feb 13 15:37:19.691414 systemd[1]: Started cri-containerd-4f6c5a5be02edd088378990c2f790ab79e3167106132a22c772c020559b87a18.scope - libcontainer container 4f6c5a5be02edd088378990c2f790ab79e3167106132a22c772c020559b87a18. Feb 13 15:37:19.694519 systemd[1]: Started cri-containerd-3ac01350e17fa48ea2bf80729522f7ed2902aca363abc9014a6d3c4362972320.scope - libcontainer container 3ac01350e17fa48ea2bf80729522f7ed2902aca363abc9014a6d3c4362972320. 
Feb 13 15:37:19.724731 containerd[1445]: time="2025-02-13T15:37:19.723992588Z" level=info msg="StartContainer for \"4f6c5a5be02edd088378990c2f790ab79e3167106132a22c772c020559b87a18\" returns successfully" Feb 13 15:37:19.732891 containerd[1445]: time="2025-02-13T15:37:19.730876057Z" level=info msg="StartContainer for \"3ac01350e17fa48ea2bf80729522f7ed2902aca363abc9014a6d3c4362972320\" returns successfully" Feb 13 15:37:20.336957 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:53538.service - OpenSSH per-connection server daemon (10.0.0.1:53538). Feb 13 15:37:20.347298 kubelet[2508]: E0213 15:37:20.347196 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:20.351181 kubelet[2508]: E0213 15:37:20.350722 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:20.379274 kubelet[2508]: I0213 15:37:20.376547 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-f5lf5" podStartSLOduration=23.376532844 podStartE2EDuration="23.376532844s" podCreationTimestamp="2025-02-13 15:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:20.374664647 +0000 UTC m=+31.203479840" watchObservedRunningTime="2025-02-13 15:37:20.376532844 +0000 UTC m=+31.205348037" Feb 13 15:37:20.379274 kubelet[2508]: I0213 15:37:20.376641 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-426nw" podStartSLOduration=23.376636484 podStartE2EDuration="23.376636484s" podCreationTimestamp="2025-02-13 15:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:20.358901151 +0000 UTC m=+31.187716384" watchObservedRunningTime="2025-02-13 15:37:20.376636484 +0000 UTC m=+31.205451637" Feb 13 15:37:20.399149 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 53538 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:20.401534 sshd-session[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:20.406065 systemd-logind[1423]: New session 9 of user core. Feb 13 15:37:20.412452 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:37:20.533933 sshd[3934]: Connection closed by 10.0.0.1 port 53538 Feb 13 15:37:20.534285 sshd-session[3926]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:20.537048 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:37:20.537192 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:53538.service: Deactivated successfully. Feb 13 15:37:20.538965 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:37:20.540823 systemd-logind[1423]: Removed session 9. Feb 13 15:37:20.590357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611536698.mount: Deactivated successfully. 
Feb 13 15:37:21.352509 kubelet[2508]: E0213 15:37:21.352371 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:21.352509 kubelet[2508]: E0213 15:37:21.352438 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:22.354600 kubelet[2508]: E0213 15:37:22.354467 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:22.361740 kubelet[2508]: E0213 15:37:22.361710 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:25.544867 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:33176.service - OpenSSH per-connection server daemon (10.0.0.1:33176). Feb 13 15:37:25.588930 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 33176 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:25.590131 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:25.593665 systemd-logind[1423]: New session 10 of user core. Feb 13 15:37:25.608408 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:37:25.717274 sshd[3950]: Connection closed by 10.0.0.1 port 33176 Feb 13 15:37:25.717718 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:25.721789 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:33176.service: Deactivated successfully. Feb 13 15:37:25.724675 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:37:25.725373 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:37:25.726412 systemd-logind[1423]: Removed session 10. Feb 13 15:37:27.338703 kubelet[2508]: E0213 15:37:27.338609 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:27.365059 kubelet[2508]: E0213 15:37:27.364948 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:37:30.734894 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:33180.service - OpenSSH per-connection server daemon (10.0.0.1:33180). Feb 13 15:37:30.782583 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 33180 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:30.783904 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:30.787655 systemd-logind[1423]: New session 11 of user core. Feb 13 15:37:30.798568 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:37:30.908102 sshd[3970]: Connection closed by 10.0.0.1 port 33180 Feb 13 15:37:30.909403 sshd-session[3968]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:30.919791 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:33180.service: Deactivated successfully. Feb 13 15:37:30.922615 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:37:30.924332 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. 
Feb 13 15:37:30.935541 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188). Feb 13 15:37:30.936530 systemd-logind[1423]: Removed session 11. Feb 13 15:37:30.976589 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:30.977850 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:30.982190 systemd-logind[1423]: New session 12 of user core. Feb 13 15:37:30.996406 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:37:31.155549 sshd[3985]: Connection closed by 10.0.0.1 port 33188 Feb 13 15:37:31.157060 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:31.168576 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:33188.service: Deactivated successfully. Feb 13 15:37:31.171125 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:37:31.175143 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:37:31.184532 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:33192.service - OpenSSH per-connection server daemon (10.0.0.1:33192). Feb 13 15:37:31.185385 systemd-logind[1423]: Removed session 12. Feb 13 15:37:31.227060 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 33192 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:31.228218 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:31.232002 systemd-logind[1423]: New session 13 of user core. Feb 13 15:37:31.242394 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:37:31.359048 sshd[3997]: Connection closed by 10.0.0.1 port 33192 Feb 13 15:37:31.358886 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:31.363289 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:33192.service: Deactivated successfully. Feb 13 15:37:31.364974 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:37:31.365643 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:37:31.366718 systemd-logind[1423]: Removed session 13. Feb 13 15:37:36.369024 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:42476.service - OpenSSH per-connection server daemon (10.0.0.1:42476). Feb 13 15:37:36.419350 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 42476 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:36.420671 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:36.424321 systemd-logind[1423]: New session 14 of user core. Feb 13 15:37:36.430403 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:37:36.541982 sshd[4013]: Connection closed by 10.0.0.1 port 42476 Feb 13 15:37:36.542355 sshd-session[4011]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:36.546187 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:42476.service: Deactivated successfully. Feb 13 15:37:36.548544 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:37:36.549338 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:37:36.550185 systemd-logind[1423]: Removed session 14. Feb 13 15:37:41.555928 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:42492.service - OpenSSH per-connection server daemon (10.0.0.1:42492). 
Feb 13 15:37:41.599603 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 42492 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:41.600830 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:41.604705 systemd-logind[1423]: New session 15 of user core. Feb 13 15:37:41.615440 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:37:41.720685 sshd[4027]: Connection closed by 10.0.0.1 port 42492 Feb 13 15:37:41.721319 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:41.730559 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:42492.service: Deactivated successfully. Feb 13 15:37:41.732044 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:37:41.733700 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:37:41.734945 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:42500.service - OpenSSH per-connection server daemon (10.0.0.1:42500). Feb 13 15:37:41.736092 systemd-logind[1423]: Removed session 15. Feb 13 15:37:41.778576 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 42500 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:41.779693 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:41.783194 systemd-logind[1423]: New session 16 of user core. Feb 13 15:37:41.790378 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:37:41.973698 sshd[4041]: Connection closed by 10.0.0.1 port 42500 Feb 13 15:37:41.974206 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:41.985648 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:42500.service: Deactivated successfully. Feb 13 15:37:41.987069 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:37:41.988403 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:37:41.989658 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:42512.service - OpenSSH per-connection server daemon (10.0.0.1:42512). Feb 13 15:37:41.991013 systemd-logind[1423]: Removed session 16. Feb 13 15:37:42.034821 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 42512 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:42.036035 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:42.039997 systemd-logind[1423]: New session 17 of user core. Feb 13 15:37:42.050394 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:37:43.291959 sshd[4054]: Connection closed by 10.0.0.1 port 42512 Feb 13 15:37:43.292653 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:43.300275 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:42512.service: Deactivated successfully. Feb 13 15:37:43.303357 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:37:43.307674 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:37:43.317736 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:58152.service - OpenSSH per-connection server daemon (10.0.0.1:58152). Feb 13 15:37:43.318646 systemd-logind[1423]: Removed session 17. 
Feb 13 15:37:43.361635 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:43.362914 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:43.366301 systemd-logind[1423]: New session 18 of user core. Feb 13 15:37:43.375391 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:37:43.604040 sshd[4076]: Connection closed by 10.0.0.1 port 58152 Feb 13 15:37:43.604387 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:43.618646 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:58152.service: Deactivated successfully. Feb 13 15:37:43.620679 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:37:43.622223 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:37:43.634552 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:58158.service - OpenSSH per-connection server daemon (10.0.0.1:58158). Feb 13 15:37:43.635662 systemd-logind[1423]: Removed session 18. Feb 13 15:37:43.678099 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 58158 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:43.679301 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:43.683043 systemd-logind[1423]: New session 19 of user core. Feb 13 15:37:43.693395 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:37:43.805112 sshd[4089]: Connection closed by 10.0.0.1 port 58158 Feb 13 15:37:43.804234 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:43.807410 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:58158.service: Deactivated successfully. Feb 13 15:37:43.808986 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:37:43.810720 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:37:43.811740 systemd-logind[1423]: Removed session 19. Feb 13 15:37:48.818677 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:58172.service - OpenSSH per-connection server daemon (10.0.0.1:58172). Feb 13 15:37:48.862473 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 58172 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:48.863554 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:48.867156 systemd-logind[1423]: New session 20 of user core. Feb 13 15:37:48.877376 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:37:48.982273 sshd[4106]: Connection closed by 10.0.0.1 port 58172 Feb 13 15:37:48.982558 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:48.985564 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:58172.service: Deactivated successfully. Feb 13 15:37:48.987724 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:37:48.988317 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:37:48.989356 systemd-logind[1423]: Removed session 20. Feb 13 15:37:53.996931 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:36106.service - OpenSSH per-connection server daemon (10.0.0.1:36106). 
Feb 13 15:37:54.040946 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 36106 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:54.042267 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:54.046175 systemd-logind[1423]: New session 21 of user core. Feb 13 15:37:54.057400 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:37:54.163574 sshd[4123]: Connection closed by 10.0.0.1 port 36106 Feb 13 15:37:54.163921 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:54.166835 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:36106.service: Deactivated successfully. Feb 13 15:37:54.169047 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:37:54.169709 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:37:54.171441 systemd-logind[1423]: Removed session 21. Feb 13 15:37:59.173886 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114). Feb 13 15:37:59.222575 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:59.223838 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:59.227529 systemd-logind[1423]: New session 22 of user core. Feb 13 15:37:59.237429 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:37:59.349059 sshd[4139]: Connection closed by 10.0.0.1 port 36114 Feb 13 15:37:59.349997 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:59.361585 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:36114.service: Deactivated successfully. Feb 13 15:37:59.362995 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:37:59.364629 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:37:59.373523 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116). Feb 13 15:37:59.378190 systemd-logind[1423]: Removed session 22. Feb 13 15:37:59.419377 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:37:59.421479 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:59.429321 systemd-logind[1423]: New session 23 of user core. Feb 13 15:37:59.437421 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:38:01.428269 containerd[1445]: time="2025-02-13T15:38:01.428208043Z" level=info msg="StopContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" with timeout 30 (s)" Feb 13 15:38:01.430293 containerd[1445]: time="2025-02-13T15:38:01.428566130Z" level=info msg="Stop container \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" with signal terminated" Feb 13 15:38:01.443465 systemd[1]: cri-containerd-1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3.scope: Deactivated successfully. Feb 13 15:38:01.464972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:01.472573 containerd[1445]: time="2025-02-13T15:38:01.472538223Z" level=info msg="StopContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" with timeout 2 (s)" Feb 13 15:38:01.473119 containerd[1445]: time="2025-02-13T15:38:01.473040873Z" level=info msg="Stop container \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" with signal terminated" Feb 13 15:38:01.474191 containerd[1445]: time="2025-02-13T15:38:01.473859008Z" level=info msg="shim disconnected" id=1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3 namespace=k8s.io Feb 13 15:38:01.474191 containerd[1445]: time="2025-02-13T15:38:01.474181254Z" level=warning msg="cleaning up after shim disconnected" id=1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3 namespace=k8s.io Feb 13 15:38:01.474191 containerd[1445]: time="2025-02-13T15:38:01.474191974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:01.480458 systemd-networkd[1375]: lxc_health: Link DOWN Feb 13 15:38:01.480464 systemd-networkd[1375]: lxc_health: Lost carrier Feb 13 15:38:01.504870 containerd[1445]: time="2025-02-13T15:38:01.504416694Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:38:01.505409 systemd[1]: cri-containerd-c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027.scope: Deactivated successfully. Feb 13 15:38:01.505725 systemd[1]: cri-containerd-c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027.scope: Consumed 6.507s CPU time. Feb 13 15:38:01.529982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027-rootfs.mount: Deactivated successfully. Feb 13 15:38:01.539095 containerd[1445]: time="2025-02-13T15:38:01.539022694Z" level=info msg="shim disconnected" id=c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027 namespace=k8s.io Feb 13 15:38:01.539095 containerd[1445]: time="2025-02-13T15:38:01.539078975Z" level=warning msg="cleaning up after shim disconnected" id=c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027 namespace=k8s.io Feb 13 15:38:01.539095 containerd[1445]: time="2025-02-13T15:38:01.539087855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:01.542090 containerd[1445]: time="2025-02-13T15:38:01.542044070Z" level=info msg="StopContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" returns successfully" Feb 13 15:38:01.545876 containerd[1445]: time="2025-02-13T15:38:01.545035765Z" level=info msg="StopPodSandbox for \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\"" Feb 13 15:38:01.548491 containerd[1445]: time="2025-02-13T15:38:01.548451629Z" level=info msg="Container to stop \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.550418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d-shm.mount: Deactivated successfully. 
Feb 13 15:38:01.554533 containerd[1445]: time="2025-02-13T15:38:01.554484420Z" level=info msg="StopContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" returns successfully" Feb 13 15:38:01.555442 containerd[1445]: time="2025-02-13T15:38:01.555405837Z" level=info msg="StopPodSandbox for \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\"" Feb 13 15:38:01.555502 containerd[1445]: time="2025-02-13T15:38:01.555451438Z" level=info msg="Container to stop \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.555502 containerd[1445]: time="2025-02-13T15:38:01.555463798Z" level=info msg="Container to stop \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.555502 containerd[1445]: time="2025-02-13T15:38:01.555471639Z" level=info msg="Container to stop \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.555502 containerd[1445]: time="2025-02-13T15:38:01.555481039Z" level=info msg="Container to stop \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.555502 containerd[1445]: time="2025-02-13T15:38:01.555489079Z" level=info msg="Container to stop \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:38:01.557992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356-shm.mount: Deactivated successfully. Feb 13 15:38:01.558693 systemd[1]: cri-containerd-5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d.scope: Deactivated successfully. Feb 13 15:38:01.570105 systemd[1]: cri-containerd-f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356.scope: Deactivated successfully. 
Feb 13 15:38:01.587833 containerd[1445]: time="2025-02-13T15:38:01.587689835Z" level=info msg="shim disconnected" id=5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d namespace=k8s.io Feb 13 15:38:01.587833 containerd[1445]: time="2025-02-13T15:38:01.587789117Z" level=warning msg="cleaning up after shim disconnected" id=5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d namespace=k8s.io Feb 13 15:38:01.587833 containerd[1445]: time="2025-02-13T15:38:01.587799757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:01.592657 containerd[1445]: time="2025-02-13T15:38:01.592531365Z" level=info msg="shim disconnected" id=f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356 namespace=k8s.io Feb 13 15:38:01.592657 containerd[1445]: time="2025-02-13T15:38:01.592594006Z" level=warning msg="cleaning up after shim disconnected" id=f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356 namespace=k8s.io Feb 13 15:38:01.592657 containerd[1445]: time="2025-02-13T15:38:01.592608686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:01.606369 containerd[1445]: time="2025-02-13T15:38:01.606193017Z" level=info msg="TearDown network for sandbox \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" successfully" Feb 13 15:38:01.606369 containerd[1445]: time="2025-02-13T15:38:01.606233618Z" level=info msg="StopPodSandbox for \"f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356\" returns successfully" Feb 13 15:38:01.608355 containerd[1445]: time="2025-02-13T15:38:01.608300776Z" level=info msg="TearDown network for sandbox \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\" successfully" Feb 13 15:38:01.608355 containerd[1445]: time="2025-02-13T15:38:01.608342737Z" level=info msg="StopPodSandbox for \"5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d\" returns successfully" Feb 13 15:38:01.790528 kubelet[2508]: I0213 15:38:01.790402 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-clustermesh-secrets\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.790528 kubelet[2508]: I0213 15:38:01.790497 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-kernel\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.790528 kubelet[2508]: I0213 15:38:01.790522 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-config-path\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.790539 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cni-path\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.790555 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-run\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.791209 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-net\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.791233 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hostproc\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.791266 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hubble-tls\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792082 kubelet[2508]: I0213 15:38:01.791294 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hzhs\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-kube-api-access-6hzhs\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791312 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-etc-cni-netd\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791327 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-lib-modules\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791341 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-xtables-lock\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791355 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-cgroup\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791373 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-cilium-config-path\") pod \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\" (UID: \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\") " Feb 13 15:38:01.792238 kubelet[2508]: I0213 15:38:01.791388 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-bpf-maps\") pod \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\" (UID: \"f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b\") " Feb 13 15:38:01.792396 kubelet[2508]: I0213 15:38:01.791430 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snh9r\" (UniqueName: \"kubernetes.io/projected/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-kube-api-access-snh9r\") pod \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\" (UID: \"9f6d5097-9e47-4b49-bddd-b48310d8ef4e\") " Feb 13 15:38:01.796180 kubelet[2508]: I0213 15:38:01.796113 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:38:01.801019 kubelet[2508]: I0213 15:38:01.800628 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.803027 kubelet[2508]: I0213 15:38:01.802347 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.803027 kubelet[2508]: I0213 15:38:01.802406 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.803027 kubelet[2508]: I0213 15:38:01.802424 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.803027 kubelet[2508]: I0213 15:38:01.802447 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.804023 kubelet[2508]: I0213 15:38:01.803966 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.804093 kubelet[2508]: I0213 15:38:01.804032 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.804093 kubelet[2508]: I0213 15:38:01.804051 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.804290 kubelet[2508]: I0213 15:38:01.804266 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.804401 kubelet[2508]: I0213 15:38:01.804387 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:38:01.805944 kubelet[2508]: I0213 15:38:01.805906 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:38:01.806472 kubelet[2508]: I0213 15:38:01.806444 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:38:01.806602 kubelet[2508]: I0213 15:38:01.806576 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-kube-api-access-snh9r" (OuterVolumeSpecName: "kube-api-access-snh9r") pod "9f6d5097-9e47-4b49-bddd-b48310d8ef4e" (UID: "9f6d5097-9e47-4b49-bddd-b48310d8ef4e"). InnerVolumeSpecName "kube-api-access-snh9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:38:01.806887 kubelet[2508]: I0213 15:38:01.806856 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f6d5097-9e47-4b49-bddd-b48310d8ef4e" (UID: "9f6d5097-9e47-4b49-bddd-b48310d8ef4e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:38:01.807791 kubelet[2508]: I0213 15:38:01.807728 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-kube-api-access-6hzhs" (OuterVolumeSpecName: "kube-api-access-6hzhs") pod "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" (UID: "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b"). InnerVolumeSpecName "kube-api-access-6hzhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892075 2508 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-snh9r\" (UniqueName: \"kubernetes.io/projected/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-kube-api-access-snh9r\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892115 2508 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892125 2508 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892134 2508 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892143 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892131 kubelet[2508]: I0213 15:38:01.892150 2508 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892158 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892167 2508 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892174 2508 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892181 2508 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6hzhs\" (UniqueName: \"kubernetes.io/projected/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-kube-api-access-6hzhs\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892188 2508 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: 
I0213 15:38:01.892195 2508 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892202 2508 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892412 kubelet[2508]: I0213 15:38:01.892209 2508 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892576 kubelet[2508]: I0213 15:38:01.892216 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:01.892576 kubelet[2508]: I0213 15:38:01.892224 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f6d5097-9e47-4b49-bddd-b48310d8ef4e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:38:02.449013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee5134cdb9188f76be03d3e7055c6a0ea05c20fe26d67e6e35feae0e77de96d-rootfs.mount: Deactivated successfully. Feb 13 15:38:02.449113 systemd[1]: var-lib-kubelet-pods-9f6d5097\x2d9e47\x2d4b49\x2dbddd\x2db48310d8ef4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnh9r.mount: Deactivated successfully. Feb 13 15:38:02.449214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6ac3ac9b46f06fa777e673d6c95a9ce4948b2586a5044e7f64fd22b7a3f4356-rootfs.mount: Deactivated successfully. Feb 13 15:38:02.449287 systemd[1]: var-lib-kubelet-pods-f2e2d9bf\x2d64f3\x2d4fb1\x2d9bb4\x2da66b321d082b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hzhs.mount: Deactivated successfully. Feb 13 15:38:02.449388 systemd[1]: var-lib-kubelet-pods-f2e2d9bf\x2d64f3\x2d4fb1\x2d9bb4\x2da66b321d082b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:38:02.449475 systemd[1]: var-lib-kubelet-pods-f2e2d9bf\x2d64f3\x2d4fb1\x2d9bb4\x2da66b321d082b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:38:02.450053 kubelet[2508]: I0213 15:38:02.449634 2508 scope.go:117] "RemoveContainer" containerID="1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3" Feb 13 15:38:02.452161 containerd[1445]: time="2025-02-13T15:38:02.452106240Z" level=info msg="RemoveContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\"" Feb 13 15:38:02.455196 systemd[1]: Removed slice kubepods-besteffort-pod9f6d5097_9e47_4b49_bddd_b48310d8ef4e.slice - libcontainer container kubepods-besteffort-pod9f6d5097_9e47_4b49_bddd_b48310d8ef4e.slice. Feb 13 15:38:02.456148 containerd[1445]: time="2025-02-13T15:38:02.456045711Z" level=info msg="RemoveContainer for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" returns successfully" Feb 13 15:38:02.456436 systemd[1]: Removed slice kubepods-burstable-podf2e2d9bf_64f3_4fb1_9bb4_a66b321d082b.slice - libcontainer container kubepods-burstable-podf2e2d9bf_64f3_4fb1_9bb4_a66b321d082b.slice. 
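The kubelet entries above walk through the normal volume teardown for the two pods being deleted: each mount is reported first as "UnmountVolume started", then as "Volume detached", and the per-pod systemd mount units and cgroup slices are cleaned up afterwards. For illustration only (not part of the log), a minimal Python sketch that checks on the node whether the per-pod volume directories under /var/lib/kubelet/pods/<uid>/volumes are really gone; the two pod UIDs are copied from the log above, and the directory layout is the standard kubelet one.

#!/usr/bin/env python3
# Illustrative sketch only: confirm kubelet removed the per-pod volume
# directories for the two pods torn down in the log above.
from pathlib import Path

# Pod UIDs copied verbatim from the kubelet entries above.
POD_UIDS = [
    "f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b",  # cilium agent pod
    "9f6d5097-9e47-4b49-bddd-b48310d8ef4e",  # cilium operator pod
]

for uid in POD_UIDS:
    volumes_dir = Path("/var/lib/kubelet/pods") / uid / "volumes"
    if volumes_dir.exists():
        leftovers = sum(1 for _ in volumes_dir.rglob("*"))
        print(f"{uid}: volumes dir still present ({leftovers} entries)")
    else:
        print(f"{uid}: volumes dir removed, cleanup complete")

The "Cleaned up orphaned pod volumes dir" kubelet entries further down confirm the same thing from the node's own point of view.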
Feb 13 15:38:02.456513 systemd[1]: kubepods-burstable-podf2e2d9bf_64f3_4fb1_9bb4_a66b321d082b.slice: Consumed 6.634s CPU time. Feb 13 15:38:02.456884 kubelet[2508]: I0213 15:38:02.456613 2508 scope.go:117] "RemoveContainer" containerID="1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3" Feb 13 15:38:02.457733 containerd[1445]: time="2025-02-13T15:38:02.457516977Z" level=error msg="ContainerStatus for \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\": not found" Feb 13 15:38:02.462766 kubelet[2508]: E0213 15:38:02.462741 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\": not found" containerID="1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3" Feb 13 15:38:02.462852 kubelet[2508]: I0213 15:38:02.462774 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3"} err="failed to get container status \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f94396bb7e032bc7b2fdc0db9b88f964b7de7f3461123c61192baf2d8f0a3\": not found" Feb 13 15:38:02.462852 kubelet[2508]: I0213 15:38:02.462851 2508 scope.go:117] "RemoveContainer" containerID="c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027" Feb 13 15:38:02.463878 containerd[1445]: time="2025-02-13T15:38:02.463847571Z" level=info msg="RemoveContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\"" Feb 13 15:38:02.473295 containerd[1445]: time="2025-02-13T15:38:02.473175979Z" level=info msg="RemoveContainer for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" returns successfully" Feb 13 15:38:02.473462 kubelet[2508]: I0213 15:38:02.473444 2508 scope.go:117] "RemoveContainer" containerID="a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0" Feb 13 15:38:02.474714 containerd[1445]: time="2025-02-13T15:38:02.474546763Z" level=info msg="RemoveContainer for \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\"" Feb 13 15:38:02.478783 containerd[1445]: time="2025-02-13T15:38:02.478753759Z" level=info msg="RemoveContainer for \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\" returns successfully" Feb 13 15:38:02.479101 kubelet[2508]: I0213 15:38:02.479073 2508 scope.go:117] "RemoveContainer" containerID="4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258" Feb 13 15:38:02.484012 containerd[1445]: time="2025-02-13T15:38:02.483904372Z" level=info msg="RemoveContainer for \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\"" Feb 13 15:38:02.486558 containerd[1445]: time="2025-02-13T15:38:02.486501938Z" level=info msg="RemoveContainer for \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\" returns successfully" Feb 13 15:38:02.486739 kubelet[2508]: I0213 15:38:02.486696 2508 scope.go:117] "RemoveContainer" containerID="0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079" Feb 13 15:38:02.491566 containerd[1445]: time="2025-02-13T15:38:02.491518189Z" level=info msg="RemoveContainer for 
\"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\"" Feb 13 15:38:02.500459 containerd[1445]: time="2025-02-13T15:38:02.500411309Z" level=info msg="RemoveContainer for \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\" returns successfully" Feb 13 15:38:02.500698 kubelet[2508]: I0213 15:38:02.500663 2508 scope.go:117] "RemoveContainer" containerID="d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b" Feb 13 15:38:02.502860 containerd[1445]: time="2025-02-13T15:38:02.502814472Z" level=info msg="RemoveContainer for \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\"" Feb 13 15:38:02.507952 containerd[1445]: time="2025-02-13T15:38:02.507910003Z" level=info msg="RemoveContainer for \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\" returns successfully" Feb 13 15:38:02.509521 kubelet[2508]: I0213 15:38:02.509491 2508 scope.go:117] "RemoveContainer" containerID="c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027" Feb 13 15:38:02.509797 containerd[1445]: time="2025-02-13T15:38:02.509728796Z" level=error msg="ContainerStatus for \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\": not found" Feb 13 15:38:02.509883 kubelet[2508]: E0213 15:38:02.509857 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\": not found" containerID="c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027" Feb 13 15:38:02.512325 kubelet[2508]: I0213 15:38:02.509887 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027"} err="failed to get container status \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\": rpc error: code = NotFound desc = an error occurred when try to find container \"c181c252fc4f1259898f49fba3de5267782c5325498d0bdd0f5d39a5c428b027\": not found" Feb 13 15:38:02.512325 kubelet[2508]: I0213 15:38:02.512324 2508 scope.go:117] "RemoveContainer" containerID="a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0" Feb 13 15:38:02.513530 containerd[1445]: time="2025-02-13T15:38:02.513457623Z" level=error msg="ContainerStatus for \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\": not found" Feb 13 15:38:02.513613 kubelet[2508]: E0213 15:38:02.513589 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\": not found" containerID="a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0" Feb 13 15:38:02.513647 kubelet[2508]: I0213 15:38:02.513617 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0"} err="failed to get container status \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"a367762f3a19911ebf3c1c9debcf22f10a525060a0ad5f49494a79eb80d633d0\": not found" Feb 13 15:38:02.513647 kubelet[2508]: I0213 15:38:02.513636 2508 scope.go:117] "RemoveContainer" containerID="4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258" Feb 13 15:38:02.513861 containerd[1445]: time="2025-02-13T15:38:02.513809190Z" level=error msg="ContainerStatus for \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\": not found" Feb 13 15:38:02.513943 kubelet[2508]: E0213 15:38:02.513918 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\": not found" containerID="4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258" Feb 13 15:38:02.514002 kubelet[2508]: I0213 15:38:02.513974 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258"} err="failed to get container status \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c479af4c1a3090e4f84b4827c2fee5baa1a32235b918e8832fe0963b2516258\": not found" Feb 13 15:38:02.514002 kubelet[2508]: I0213 15:38:02.514000 2508 scope.go:117] "RemoveContainer" containerID="0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079" Feb 13 15:38:02.514214 containerd[1445]: time="2025-02-13T15:38:02.514182476Z" level=error msg="ContainerStatus for \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\": not found" Feb 13 15:38:02.514314 kubelet[2508]: E0213 15:38:02.514291 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\": not found" containerID="0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079" Feb 13 15:38:02.514349 kubelet[2508]: I0213 15:38:02.514316 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079"} err="failed to get container status \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a5a796814e40bce5ecec16fef392fe90f6998637ab0738285995bed85426079\": not found" Feb 13 15:38:02.514349 kubelet[2508]: I0213 15:38:02.514331 2508 scope.go:117] "RemoveContainer" containerID="d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b" Feb 13 15:38:02.514725 containerd[1445]: time="2025-02-13T15:38:02.514607524Z" level=error msg="ContainerStatus for \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\": not found" Feb 13 15:38:02.514789 kubelet[2508]: E0213 15:38:02.514762 2508 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\": not found" containerID="d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b" Feb 13 15:38:02.514868 kubelet[2508]: I0213 15:38:02.514788 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b"} err="failed to get container status \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d66cc0421e444e77e09b31e5cc7c7b49ccc58aabb9b321daeea72d60f44bcd1b\": not found" Feb 13 15:38:03.264204 kubelet[2508]: I0213 15:38:03.264159 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6d5097-9e47-4b49-bddd-b48310d8ef4e" path="/var/lib/kubelet/pods/9f6d5097-9e47-4b49-bddd-b48310d8ef4e/volumes" Feb 13 15:38:03.264581 kubelet[2508]: I0213 15:38:03.264561 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" path="/var/lib/kubelet/pods/f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b/volumes" Feb 13 15:38:03.386197 sshd[4153]: Connection closed by 10.0.0.1 port 36116 Feb 13 15:38:03.387596 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:03.398207 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:36116.service: Deactivated successfully. Feb 13 15:38:03.400441 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:38:03.400752 systemd[1]: session-23.scope: Consumed 1.311s CPU time. Feb 13 15:38:03.402184 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:38:03.409605 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:55664.service - OpenSSH per-connection server daemon (10.0.0.1:55664). Feb 13 15:38:03.410550 systemd-logind[1423]: Removed session 23. Feb 13 15:38:03.449433 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 55664 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:38:03.450865 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:03.455156 systemd-logind[1423]: New session 24 of user core. Feb 13 15:38:03.464459 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:38:04.278654 sshd[4315]: Connection closed by 10.0.0.1 port 55664 Feb 13 15:38:04.279156 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:04.291901 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:55664.service: Deactivated successfully. 
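The error-level entries above are expected rather than a fault: kubelet deletes each container, containerd confirms "RemoveContainer ... returns successfully", and the follow-up ContainerStatus queries for the now-deleted IDs return NotFound, which kubelet logs and moves past. For illustration only, a short Python sketch (reading journal text on stdin, an assumption) that pairs the two message types so genuinely unexpected NotFound responses stand out; the message formats are taken from the lines above.

#!/usr/bin/env python3
# Illustrative sketch only: pair containerd "RemoveContainer ... returns
# successfully" entries with later "not found" ContainerStatus errors, so the
# expected NotFound noise can be separated from real failures.
import re
import sys

REMOVED = re.compile(r'RemoveContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
NOT_FOUND = re.compile(r'find container \\?"([0-9a-f]{64})\\?": not found')

removed, not_found = set(), set()
for line in sys.stdin:
    if m := REMOVED.search(line):
        removed.add(m.group(1))
    if m := NOT_FOUND.search(line):
        not_found.add(m.group(1))

print("removed containers:         ", len(removed))
print("expected NotFound responses:", len(not_found & removed))
print("NotFound without a removal: ", sorted(not_found - removed))

Fed with journalctl output for the kubelet and containerd units (one entry per line), the last set should be empty for a clean teardown like the one above.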
Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294710 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="cilium-agent" Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294742 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f6d5097-9e47-4b49-bddd-b48310d8ef4e" containerName="cilium-operator" Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294751 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="clean-cilium-state" Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294758 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="mount-cgroup" Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294764 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="apply-sysctl-overwrites" Feb 13 15:38:04.294751 kubelet[2508]: E0213 15:38:04.294770 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="mount-bpf-fs" Feb 13 15:38:04.294829 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:38:04.298172 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:38:04.303078 kubelet[2508]: I0213 15:38:04.303033 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e2d9bf-64f3-4fb1-9bb4-a66b321d082b" containerName="cilium-agent" Feb 13 15:38:04.303078 kubelet[2508]: I0213 15:38:04.303070 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6d5097-9e47-4b49-bddd-b48310d8ef4e" containerName="cilium-operator" Feb 13 15:38:04.307556 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:55678.service - OpenSSH per-connection server daemon (10.0.0.1:55678). Feb 13 15:38:04.308865 systemd-logind[1423]: Removed session 24. Feb 13 15:38:04.310099 kubelet[2508]: E0213 15:38:04.310058 2508 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:38:04.319520 systemd[1]: Created slice kubepods-burstable-pod7078fd53_46ce_4b07_990b_4073312dc0f0.slice - libcontainer container kubepods-burstable-pod7078fd53_46ce_4b07_990b_4073312dc0f0.slice. Feb 13 15:38:04.359405 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 55678 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:38:04.360538 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:04.363927 systemd-logind[1423]: New session 25 of user core. Feb 13 15:38:04.370403 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 15:38:04.405500 kubelet[2508]: I0213 15:38:04.405454 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-host-proc-sys-net\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405500 kubelet[2508]: I0213 15:38:04.405502 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7078fd53-46ce-4b07-990b-4073312dc0f0-hubble-tls\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405523 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-hostproc\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405539 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7078fd53-46ce-4b07-990b-4073312dc0f0-cilium-config-path\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405553 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6xg\" (UniqueName: \"kubernetes.io/projected/7078fd53-46ce-4b07-990b-4073312dc0f0-kube-api-access-dw6xg\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405567 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-bpf-maps\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405581 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7078fd53-46ce-4b07-990b-4073312dc0f0-clustermesh-secrets\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405599 kubelet[2508]: I0213 15:38:04.405595 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-cilium-cgroup\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405611 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-cilium-run\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405625 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-etc-cni-netd\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405641 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7078fd53-46ce-4b07-990b-4073312dc0f0-cilium-ipsec-secrets\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405654 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-host-proc-sys-kernel\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405671 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-cni-path\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405797 kubelet[2508]: I0213 15:38:04.405684 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-lib-modules\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.405953 kubelet[2508]: I0213 15:38:04.405701 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7078fd53-46ce-4b07-990b-4073312dc0f0-xtables-lock\") pod \"cilium-s5mmh\" (UID: \"7078fd53-46ce-4b07-990b-4073312dc0f0\") " pod="kube-system/cilium-s5mmh" Feb 13 15:38:04.420954 sshd[4328]: Connection closed by 10.0.0.1 port 55678 Feb 13 15:38:04.421273 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:04.434693 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:55678.service: Deactivated successfully. Feb 13 15:38:04.436133 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:38:04.437461 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:38:04.445496 systemd[1]: Started sshd@25-10.0.0.131:22-10.0.0.1:55684.service - OpenSSH per-connection server daemon (10.0.0.1:55684). Feb 13 15:38:04.446391 systemd-logind[1423]: Removed session 25. Feb 13 15:38:04.486828 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 55684 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:38:04.488076 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:04.491630 systemd-logind[1423]: New session 26 of user core. Feb 13 15:38:04.505452 systemd[1]: Started session-26.scope - Session 26 of User core. 
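The reconciler entries above enumerate every volume kubelet attaches for the replacement pod kube-system/cilium-s5mmh: the usual Cilium host paths (bpf-maps, cilium-run, cilium-cgroup, cni-path, hostproc, lib-modules, xtables-lock, etc-cni-netd, host-proc-sys-net, host-proc-sys-kernel), the clustermesh-secrets and cilium-ipsec-secrets Secrets, the cilium-config-path ConfigMap, and the projected hubble-tls and kube-api-access-dw6xg volumes. For illustration only, a read-only sketch using the official Kubernetes Python client (assuming the kubernetes package is installed and a kubeconfig for this cluster is available) that lists the same volume set from the pod spec.

#!/usr/bin/env python3
# Illustrative sketch only: list the volumes of the cilium-s5mmh pod that the
# kubelet log above is attaching, via the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster

v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(name="cilium-s5mmh", namespace="kube-system")

for vol in pod.spec.volumes:
    if vol.host_path:
        kind, detail = "hostPath", vol.host_path.path
    elif vol.secret:
        kind, detail = "secret", vol.secret.secret_name
    elif vol.config_map:
        kind, detail = "configMap", vol.config_map.name
    elif vol.projected:
        kind, detail = "projected", f"{len(vol.projected.sources)} source(s)"
    else:
        kind, detail = "other", ""
    print(f"{vol.name:25s} {kind:10s} {detail}")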
Feb 13 15:38:04.626578 kubelet[2508]: E0213 15:38:04.626467 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:04.639147 containerd[1445]: time="2025-02-13T15:38:04.639100191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5mmh,Uid:7078fd53-46ce-4b07-990b-4073312dc0f0,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:04.658535 containerd[1445]: time="2025-02-13T15:38:04.658449159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:04.658535 containerd[1445]: time="2025-02-13T15:38:04.658505480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:04.658535 containerd[1445]: time="2025-02-13T15:38:04.658516880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:04.658762 containerd[1445]: time="2025-02-13T15:38:04.658589962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:04.679432 systemd[1]: Started cri-containerd-7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0.scope - libcontainer container 7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0. Feb 13 15:38:04.718317 containerd[1445]: time="2025-02-13T15:38:04.717943090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5mmh,Uid:7078fd53-46ce-4b07-990b-4073312dc0f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\"" Feb 13 15:38:04.719280 kubelet[2508]: E0213 15:38:04.718616 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:04.720784 containerd[1445]: time="2025-02-13T15:38:04.720747378Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:38:04.732497 containerd[1445]: time="2025-02-13T15:38:04.732446136Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549\"" Feb 13 15:38:04.733115 containerd[1445]: time="2025-02-13T15:38:04.733027466Z" level=info msg="StartContainer for \"b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549\"" Feb 13 15:38:04.759402 systemd[1]: Started cri-containerd-b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549.scope - libcontainer container b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549. Feb 13 15:38:04.780499 containerd[1445]: time="2025-02-13T15:38:04.780378070Z" level=info msg="StartContainer for \"b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549\" returns successfully" Feb 13 15:38:04.789560 systemd[1]: cri-containerd-b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549.scope: Deactivated successfully. 
Feb 13 15:38:04.818046 containerd[1445]: time="2025-02-13T15:38:04.817835907Z" level=info msg="shim disconnected" id=b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549 namespace=k8s.io Feb 13 15:38:04.818046 containerd[1445]: time="2025-02-13T15:38:04.817888948Z" level=warning msg="cleaning up after shim disconnected" id=b304073004382a1608fbd0f3d57bf3835d077eff53614b5d8263e51c756b3549 namespace=k8s.io Feb 13 15:38:04.818046 containerd[1445]: time="2025-02-13T15:38:04.817896908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:05.458261 kubelet[2508]: E0213 15:38:05.458226 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:05.460947 containerd[1445]: time="2025-02-13T15:38:05.460745170Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:38:05.472545 containerd[1445]: time="2025-02-13T15:38:05.472502525Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013\"" Feb 13 15:38:05.473527 containerd[1445]: time="2025-02-13T15:38:05.473499461Z" level=info msg="StartContainer for \"ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013\"" Feb 13 15:38:05.500398 systemd[1]: Started cri-containerd-ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013.scope - libcontainer container ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013. Feb 13 15:38:05.525602 containerd[1445]: time="2025-02-13T15:38:05.525274756Z" level=info msg="StartContainer for \"ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013\" returns successfully" Feb 13 15:38:05.531270 systemd[1]: cri-containerd-ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013.scope: Deactivated successfully. Feb 13 15:38:05.546793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:05.562819 containerd[1445]: time="2025-02-13T15:38:05.562745974Z" level=info msg="shim disconnected" id=ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013 namespace=k8s.io Feb 13 15:38:05.562819 containerd[1445]: time="2025-02-13T15:38:05.562811056Z" level=warning msg="cleaning up after shim disconnected" id=ab3b10394e833a2f5ce900a4a7f45a8f758dcb89d016bb65ffd05dcec18ee013 namespace=k8s.io Feb 13 15:38:05.562819 containerd[1445]: time="2025-02-13T15:38:05.562819256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:06.461677 kubelet[2508]: E0213 15:38:06.461643 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:06.464288 containerd[1445]: time="2025-02-13T15:38:06.464123725Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:38:06.477575 containerd[1445]: time="2025-02-13T15:38:06.477517060Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286\"" Feb 13 15:38:06.478943 containerd[1445]: time="2025-02-13T15:38:06.478469315Z" level=info msg="StartContainer for \"a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286\"" Feb 13 15:38:06.507404 systemd[1]: Started cri-containerd-a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286.scope - libcontainer container a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286. Feb 13 15:38:06.535699 systemd[1]: cri-containerd-a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286.scope: Deactivated successfully. Feb 13 15:38:06.537600 containerd[1445]: time="2025-02-13T15:38:06.537551903Z" level=info msg="StartContainer for \"a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286\" returns successfully" Feb 13 15:38:06.561648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:06.566583 containerd[1445]: time="2025-02-13T15:38:06.566497848Z" level=info msg="shim disconnected" id=a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286 namespace=k8s.io Feb 13 15:38:06.566583 containerd[1445]: time="2025-02-13T15:38:06.566552648Z" level=warning msg="cleaning up after shim disconnected" id=a6f0cb2006e564c16379ecf501f5b05bdc179597979a8d8935077a526dc70286 namespace=k8s.io Feb 13 15:38:06.566583 containerd[1445]: time="2025-02-13T15:38:06.566562009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:07.465481 kubelet[2508]: E0213 15:38:07.465453 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:07.470277 containerd[1445]: time="2025-02-13T15:38:07.468382713Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:38:07.480152 containerd[1445]: time="2025-02-13T15:38:07.480104936Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663\"" Feb 13 15:38:07.480644 containerd[1445]: time="2025-02-13T15:38:07.480582383Z" level=info msg="StartContainer for \"46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663\"" Feb 13 15:38:07.510445 systemd[1]: Started cri-containerd-46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663.scope - libcontainer container 46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663. Feb 13 15:38:07.530581 systemd[1]: cri-containerd-46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663.scope: Deactivated successfully. Feb 13 15:38:07.532541 containerd[1445]: time="2025-02-13T15:38:07.532495713Z" level=info msg="StartContainer for \"46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663\" returns successfully" Feb 13 15:38:07.552900 containerd[1445]: time="2025-02-13T15:38:07.552825190Z" level=info msg="shim disconnected" id=46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663 namespace=k8s.io Feb 13 15:38:07.552900 containerd[1445]: time="2025-02-13T15:38:07.552888471Z" level=warning msg="cleaning up after shim disconnected" id=46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663 namespace=k8s.io Feb 13 15:38:07.552900 containerd[1445]: time="2025-02-13T15:38:07.552899351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:07.562214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46bea15549db24c7800f0d598c9e3f702dcbe8cbec637fe5c0d5d955fd197663-rootfs.mount: Deactivated successfully. Feb 13 15:38:08.469259 kubelet[2508]: E0213 15:38:08.469193 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:08.472369 containerd[1445]: time="2025-02-13T15:38:08.472313771Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:38:08.484604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854012566.mount: Deactivated successfully. 
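From 15:38:04 onwards the containerd entries trace the usual Cilium startup chain inside the new sandbox: the init containers mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run once (CreateContainer, StartContainer, scope deactivated, shim disconnected) before the long-running cilium-agent container is created and started below. For illustration only, a minimal Python sketch (again reading journal text on stdin, an assumption) that reconstructs that start order from the CreateContainer/StartContainer messages shown above.

#!/usr/bin/env python3
# Illustrative sketch only: reconstruct the container start order inside a
# sandbox from containerd's CreateContainer / StartContainer messages.
import re
import sys

# e.g. "... &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"<id>\""
CREATED = re.compile(
    r'&ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} returns container id \\?"([0-9a-f]{64})\\?"')
# e.g. "StartContainer for \"<id>\" returns successfully"
STARTED = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')

name_by_id = {}
order = []
for line in sys.stdin:
    if m := CREATED.search(line):
        name_by_id[m.group(2)] = m.group(1)
    elif m := STARTED.search(line):
        cid = m.group(1)
        order.append(name_by_id.get(cid, cid[:12]))

# For a startup like the one above this should yield something like:
# mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent
print(" -> ".join(order))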
Feb 13 15:38:08.488462 containerd[1445]: time="2025-02-13T15:38:08.488403255Z" level=info msg="CreateContainer within sandbox \"7a055cd166481f2bdd01e0585e32bcd8243dcabfeaa54dfd55e439423cdf2cf0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901\"" Feb 13 15:38:08.489383 containerd[1445]: time="2025-02-13T15:38:08.489180507Z" level=info msg="StartContainer for \"35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901\"" Feb 13 15:38:08.511414 systemd[1]: Started cri-containerd-35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901.scope - libcontainer container 35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901. Feb 13 15:38:08.538322 containerd[1445]: time="2025-02-13T15:38:08.538279652Z" level=info msg="StartContainer for \"35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901\" returns successfully" Feb 13 15:38:08.562276 systemd[1]: run-containerd-runc-k8s.io-35be2b51c27a5ca2a39d696a0c70ac46d1dbfbde07dece09fd1888c47899e901-runc.jvRmCm.mount: Deactivated successfully. Feb 13 15:38:08.799434 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:38:09.474268 kubelet[2508]: E0213 15:38:09.474166 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:09.492806 kubelet[2508]: I0213 15:38:09.492711 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s5mmh" podStartSLOduration=5.492680764 podStartE2EDuration="5.492680764s" podCreationTimestamp="2025-02-13 15:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:09.492507241 +0000 UTC m=+80.321322474" watchObservedRunningTime="2025-02-13 15:38:09.492680764 +0000 UTC m=+80.321495957" Feb 13 15:38:10.627991 kubelet[2508]: E0213 15:38:10.627923 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:11.707131 systemd-networkd[1375]: lxc_health: Link UP Feb 13 15:38:11.721187 systemd-networkd[1375]: lxc_health: Gained carrier Feb 13 15:38:12.262144 kubelet[2508]: E0213 15:38:12.262086 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:12.628945 kubelet[2508]: E0213 15:38:12.628636 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:13.262669 kubelet[2508]: E0213 15:38:13.261615 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:13.481233 kubelet[2508]: E0213 15:38:13.481205 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:13.609347 systemd-networkd[1375]: lxc_health: Gained IPv6LL Feb 13 15:38:14.482634 kubelet[2508]: E0213 15:38:14.482582 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:16.262010 kubelet[2508]: E0213 15:38:16.261972 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:38:17.203755 sshd[4339]: Connection closed by 10.0.0.1 port 55684 Feb 13 15:38:17.204637 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:17.208000 systemd[1]: sshd@25-10.0.0.131:22-10.0.0.1:55684.service: Deactivated successfully. Feb 13 15:38:17.209619 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:38:17.210788 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:38:17.212678 systemd-logind[1423]: Removed session 26.