Feb 13 19:17:18.929623 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:17:18.929645 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:17:18.929655 kernel: KASLR enabled
Feb 13 19:17:18.929661 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:17:18.929667 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 19:17:18.929673 kernel: random: crng init done
Feb 13 19:17:18.929680 kernel: secureboot: Secure boot disabled
Feb 13 19:17:18.929686 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:17:18.929693 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:17:18.929700 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:17:18.929706 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929712 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929719 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929725 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929732 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929740 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929747 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929753 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929764 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:18.929771 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:17:18.929777 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:17:18.929784 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:18.929790 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:17:18.929797 kernel: Zone ranges:
Feb 13 19:17:18.929803 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:18.929811 kernel: DMA32 empty
Feb 13 19:17:18.929817 kernel: Normal empty
Feb 13 19:17:18.929823 kernel: Movable zone start for each node
Feb 13 19:17:18.929830 kernel: Early memory node ranges
Feb 13 19:17:18.929836 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:17:18.929843 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:17:18.929849 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:17:18.929856 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:17:18.929862 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:17:18.929868 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:17:18.929874 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:17:18.929880 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:18.929887 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:17:18.929894 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:17:18.929902 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:17:18.929911 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:17:18.929917 kernel: psci: Trusted OS migration not required
Feb 13 19:17:18.929924 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:17:18.929932 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:17:18.929939 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:17:18.929946 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:17:18.929953 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:17:18.929959 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:17:18.929966 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:17:18.929972 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:17:18.929979 kernel: CPU features: detected: Spectre-v4
Feb 13 19:17:18.929985 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:17:18.929992 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:17:18.930000 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:17:18.930006 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:17:18.930013 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:17:18.930020 kernel: alternatives: applying boot alternatives
Feb 13 19:17:18.930027 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:17:18.930034 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:17:18.930041 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:17:18.930047 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:17:18.930054 kernel: Fallback order for Node 0: 0
Feb 13 19:17:18.930061 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:17:18.930068 kernel: Policy zone: DMA
Feb 13 19:17:18.930075 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:17:18.930082 kernel: software IO TLB: area num 4.
Feb 13 19:17:18.930088 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:17:18.930095 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 19:17:18.930102 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:17:18.930108 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:17:18.930116 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:17:18.930122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:17:18.930129 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:17:18.930135 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:17:18.930142 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:17:18.930148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:17:18.930157 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:17:18.930164 kernel: GICv3: 256 SPIs implemented
Feb 13 19:17:18.930170 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:17:18.930177 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:17:18.930184 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:17:18.930191 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:17:18.930197 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:17:18.930204 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:17:18.930218 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:17:18.930230 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:17:18.930236 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:17:18.930244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:17:18.930251 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:18.930258 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:17:18.930265 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:17:18.930272 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:17:18.930279 kernel: arm-pv: using stolen time PV
Feb 13 19:17:18.930286 kernel: Console: colour dummy device 80x25
Feb 13 19:17:18.930292 kernel: ACPI: Core revision 20230628
Feb 13 19:17:18.930299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:17:18.930311 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:17:18.930321 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:17:18.930328 kernel: landlock: Up and running.
Feb 13 19:17:18.930336 kernel: SELinux: Initializing.
Feb 13 19:17:18.930342 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:17:18.930350 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:17:18.930357 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:17:18.930364 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:17:18.930371 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:17:18.930389 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:17:18.930399 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:17:18.930406 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:17:18.930429 kernel: Remapping and enabling EFI services.
Feb 13 19:17:18.930436 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:17:18.930442 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:17:18.930449 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:17:18.930456 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:17:18.930463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:18.930469 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:17:18.930476 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:17:18.930485 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:17:18.930492 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:17:18.930503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:18.930512 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:17:18.930518 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:17:18.930526 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:17:18.930533 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:17:18.930540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:18.930547 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:17:18.930555 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:17:18.930562 kernel: SMP: Total of 4 processors activated.
Feb 13 19:17:18.930570 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:17:18.930577 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:17:18.930584 kernel: CPU features: detected: Common not Private translations
Feb 13 19:17:18.930591 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:17:18.930598 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:17:18.930606 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:17:18.930614 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:17:18.930622 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:17:18.930629 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:17:18.930650 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:17:18.930658 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:17:18.930667 kernel: alternatives: applying system-wide alternatives
Feb 13 19:17:18.930677 kernel: devtmpfs: initialized
Feb 13 19:17:18.930687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:17:18.930697 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:17:18.930707 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:17:18.930714 kernel: SMBIOS 3.0.0 present.
Feb 13 19:17:18.930721 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:17:18.930728 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:17:18.930735 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:17:18.930743 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:17:18.930750 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:17:18.930761 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:17:18.930769 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:17:18.930778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:17:18.930785 kernel: cpuidle: using governor menu
Feb 13 19:17:18.930792 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:17:18.930799 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:17:18.930806 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:17:18.930813 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:17:18.930820 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:17:18.930828 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:17:18.930835 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:17:18.930844 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:17:18.930851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:17:18.930858 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:17:18.930865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:17:18.930872 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:17:18.930879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:17:18.930886 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:17:18.930894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:17:18.930900 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:17:18.930909 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:17:18.930916 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:17:18.930923 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:17:18.930930 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:17:18.930937 kernel: ACPI: Interpreter enabled
Feb 13 19:17:18.930944 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:17:18.930951 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:17:18.930958 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:17:18.930965 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:17:18.930973 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:17:18.931107 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:17:18.931178 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:17:18.931246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:17:18.931321 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:17:18.931429 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:17:18.931440 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:17:18.931452 kernel: PCI host bridge to bus 0000:00
Feb 13 19:17:18.931526 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:17:18.931587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:17:18.931645 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:17:18.931702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:17:18.931789 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:17:18.931874 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:17:18.931945 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:17:18.932012 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:17:18.932077 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:17:18.932142 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:17:18.932207 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:17:18.932273 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:17:18.932352 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:17:18.932446 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:17:18.932511 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:17:18.932521 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:17:18.932529 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:17:18.932536 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:17:18.932543 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:17:18.932551 kernel: iommu: Default domain type: Translated
Feb 13 19:17:18.932558 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:17:18.932567 kernel: efivars: Registered efivars operations
Feb 13 19:17:18.932574 kernel: vgaarb: loaded
Feb 13 19:17:18.932581 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:17:18.932588 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:17:18.932596 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:17:18.932603 kernel: pnp: PnP ACPI init
Feb 13 19:17:18.932673 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:17:18.932684 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:17:18.932693 kernel: NET: Registered PF_INET protocol family
Feb 13 19:17:18.932700 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:17:18.932708 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:17:18.932715 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:17:18.932722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:17:18.932729 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:17:18.932736 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:17:18.932743 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:17:18.932750 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:17:18.932763 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:17:18.932770 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:17:18.932777 kernel: kvm [1]: HYP mode not available
Feb 13 19:17:18.932784 kernel: Initialise system trusted keyrings
Feb 13 19:17:18.932792 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:17:18.932799 kernel: Key type asymmetric registered
Feb 13 19:17:18.932806 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:17:18.932813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:17:18.932820 kernel: io scheduler mq-deadline registered
Feb 13 19:17:18.932829 kernel: io scheduler kyber registered
Feb 13 19:17:18.932836 kernel: io scheduler bfq registered
Feb 13 19:17:18.932843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:17:18.932850 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:17:18.932858 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:17:18.932927 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:17:18.932937 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:17:18.932944 kernel: thunder_xcv, ver 1.0
Feb 13 19:17:18.932951 kernel: thunder_bgx, ver 1.0
Feb 13 19:17:18.932960 kernel: nicpf, ver 1.0
Feb 13 19:17:18.932967 kernel: nicvf, ver 1.0
Feb 13 19:17:18.933039 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:17:18.933101 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:17:18 UTC (1739474238)
Feb 13 19:17:18.933111 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:17:18.933118 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:17:18.933125 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:17:18.933133 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:17:18.933142 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:17:18.933149 kernel: Segment Routing with IPv6
Feb 13 19:17:18.933156 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:17:18.933163 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:17:18.933170 kernel: Key type dns_resolver registered
Feb 13 19:17:18.933177 kernel: registered taskstats version 1
Feb 13 19:17:18.933184 kernel: Loading compiled-in X.509 certificates
Feb 13 19:17:18.933191 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:17:18.933198 kernel: Key type .fscrypt registered
Feb 13 19:17:18.933206 kernel: Key type fscrypt-provisioning registered
Feb 13 19:17:18.933214 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:17:18.933221 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:17:18.933228 kernel: ima: No architecture policies found
Feb 13 19:17:18.933235 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:17:18.933242 kernel: clk: Disabling unused clocks
Feb 13 19:17:18.933249 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:17:18.933256 kernel: Run /init as init process
Feb 13 19:17:18.933266 kernel: with arguments:
Feb 13 19:17:18.933274 kernel: /init
Feb 13 19:17:18.933281 kernel: with environment:
Feb 13 19:17:18.933287 kernel: HOME=/
Feb 13 19:17:18.933294 kernel: TERM=linux
Feb 13 19:17:18.933301 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:17:18.933316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:17:18.933326 systemd[1]: Detected virtualization kvm.
Feb 13 19:17:18.933333 systemd[1]: Detected architecture arm64.
Feb 13 19:17:18.933343 systemd[1]: Running in initrd.
Feb 13 19:17:18.933350 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:17:18.933358 systemd[1]: Hostname set to .
Feb 13 19:17:18.933366 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:17:18.933374 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:17:18.933451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:18.933460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:18.933468 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:17:18.933478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:17:18.933486 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:17:18.933494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:17:18.933503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:17:18.933511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:17:18.933518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:18.933526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:17:18.933536 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:17:18.933547 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:17:18.933555 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:17:18.933563 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:17:18.933571 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:17:18.933578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:17:18.933586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:17:18.933593 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:17:18.933604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:18.933612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:18.933619 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:18.933627 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:17:18.933635 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:17:18.933642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:17:18.933650 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:17:18.933658 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:17:18.933665 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:17:18.933675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:17:18.933682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:18.933690 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:17:18.933698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:18.933705 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:17:18.933713 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:17:18.933746 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:17:18.933768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:18.933779 systemd-journald[238]: Journal started
Feb 13 19:17:18.933801 systemd-journald[238]: Runtime Journal (/run/log/journal/c170e984d678410a9b34fab3c6afcbe9) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:17:18.926460 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:17:18.936703 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:17:18.937144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:17:18.942404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:17:18.943831 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:17:18.944879 kernel: Bridge firewalling registered
Feb 13 19:17:18.947556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:18.949331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:17:18.951598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:17:18.953495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:18.957357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:17:18.961010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:17:18.966584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:17:18.969127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:17:18.975611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:17:18.976911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:18.980428 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:17:18.994234 dracut-cmdline[283]: dracut-dracut-053
Feb 13 19:17:18.996807 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:17:19.011154 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 19:17:19.011234 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:17:19.011265 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:17:19.016069 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 19:17:19.017015 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:17:19.021359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:19.072409 kernel: SCSI subsystem initialized
Feb 13 19:17:19.077399 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:17:19.084415 kernel: iscsi: registered transport (tcp)
Feb 13 19:17:19.097423 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:17:19.097445 kernel: QLogic iSCSI HBA Driver
Feb 13 19:17:19.140156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:17:19.152545 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:17:19.170928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:17:19.170976 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:17:19.172598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:17:19.218412 kernel: raid6: neonx8 gen() 15795 MB/s
Feb 13 19:17:19.235399 kernel: raid6: neonx4 gen() 15650 MB/s
Feb 13 19:17:19.252401 kernel: raid6: neonx2 gen() 13207 MB/s
Feb 13 19:17:19.269401 kernel: raid6: neonx1 gen() 10478 MB/s
Feb 13 19:17:19.286404 kernel: raid6: int64x8 gen() 6959 MB/s
Feb 13 19:17:19.303401 kernel: raid6: int64x4 gen() 7331 MB/s
Feb 13 19:17:19.320400 kernel: raid6: int64x2 gen() 6108 MB/s
Feb 13 19:17:19.337504 kernel: raid6: int64x1 gen() 5034 MB/s
Feb 13 19:17:19.337517 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Feb 13 19:17:19.355567 kernel: raid6: .... xor() 11901 MB/s, rmw enabled
Feb 13 19:17:19.355581 kernel: raid6: using neon recovery algorithm
Feb 13 19:17:19.360407 kernel: xor: measuring software checksum speed
Feb 13 19:17:19.361676 kernel: 8regs : 16705 MB/sec
Feb 13 19:17:19.361690 kernel: 32regs : 19631 MB/sec
Feb 13 19:17:19.362989 kernel: arm64_neon : 25972 MB/sec
Feb 13 19:17:19.363009 kernel: xor: using function: arm64_neon (25972 MB/sec)
Feb 13 19:17:19.413410 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:17:19.426030 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:17:19.434533 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:17:19.447457 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Feb 13 19:17:19.450579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:17:19.454174 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:17:19.468738 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Feb 13 19:17:19.495728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:17:19.506527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:17:19.545175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:17:19.555455 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:17:19.566184 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:17:19.567788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:17:19.570663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:19.572073 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:17:19.582534 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:17:19.593002 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:17:19.607366 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:17:19.607487 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:17:19.607500 kernel: GPT:9289727 != 19775487
Feb 13 19:17:19.607509 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:17:19.607518 kernel: GPT:9289727 != 19775487
Feb 13 19:17:19.607526 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:17:19.607543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:19.596646 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:17:19.610127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:17:19.610201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:19.612366 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:19.615454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:17:19.615521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:19.620567 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:19.626518 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Feb 13 19:17:19.626542 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (519)
Feb 13 19:17:19.632559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:19.639584 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:17:19.644130 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:17:19.646428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:19.659815 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:17:19.663812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:17:19.665022 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:17:19.680579 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:17:19.682466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:19.687745 disk-uuid[553]: Primary Header is updated.
Feb 13 19:17:19.687745 disk-uuid[553]: Secondary Entries is updated.
Feb 13 19:17:19.687745 disk-uuid[553]: Secondary Header is updated.
Feb 13 19:17:19.690924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:19.702237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:20.702893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:20.702970 disk-uuid[554]: The operation has completed successfully.
Feb 13 19:17:20.721422 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:17:20.721519 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:17:20.742530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:17:20.745329 sh[574]: Success
Feb 13 19:17:20.761402 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:17:20.788968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:17:20.805718 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:17:20.808021 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:17:20.817401 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:17:20.817432 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:20.817443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:17:20.819854 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:17:20.819870 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:17:20.823330 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:17:20.824641 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:17:20.833503 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:17:20.835589 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:17:20.841847 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:17:20.841885 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:20.841895 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:20.845402 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:20.851725 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:17:20.853434 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:17:20.858625 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:17:20.865576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:17:20.925136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:17:20.941519 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:17:20.958915 ignition[665]: Ignition 2.20.0
Feb 13 19:17:20.958925 ignition[665]: Stage: fetch-offline
Feb 13 19:17:20.959081 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:20.959091 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:20.959317 ignition[665]: parsed url from cmdline: ""
Feb 13 19:17:20.959320 ignition[665]: no config URL provided
Feb 13 19:17:20.959325 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:17:20.959333 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:17:20.959358 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 19:17:20.959363 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:17:20.971111 systemd-networkd[768]: lo: Link UP
Feb 13 19:17:20.971137 systemd-networkd[768]: lo: Gained carrier
Feb 13 19:17:20.972442 systemd-networkd[768]: Enumeration completed
Feb 13 19:17:20.971985 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:17:20.972536 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:17:20.972872 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:17:20.972876 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:17:20.974437 systemd[1]: Reached target network.target - Network.
Feb 13 19:17:20.976485 systemd-networkd[768]: eth0: Link UP
Feb 13 19:17:20.976489 systemd-networkd[768]: eth0: Gained carrier
Feb 13 19:17:20.976497 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:17:21.001435 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:17:21.019604 ignition[665]: parsing config with SHA512: 03234c42b8f6eede29976088f1d4a113638aee6a0b16a310a6c07e9cc14619d7c2b74a259f78cc3b851f4d5bd7f876b5c4a1ea0164195032b470e13b3e829ac2
Feb 13 19:17:21.024207 unknown[665]: fetched base config from "system"
Feb 13 19:17:21.024217 unknown[665]: fetched user config from "qemu"
Feb 13 19:17:21.025597 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 19:17:21.025710 ignition[665]: Ignition finished successfully
Feb 13 19:17:21.028145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:17:21.029507 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:17:21.037534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:17:21.046972 ignition[776]: Ignition 2.20.0
Feb 13 19:17:21.046982 ignition[776]: Stage: kargs
Feb 13 19:17:21.047129 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:21.047139 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:21.047991 ignition[776]: kargs: kargs passed
Feb 13 19:17:21.049665 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:17:21.048028 ignition[776]: Ignition finished successfully
Feb 13 19:17:21.052521 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:17:21.064543 ignition[785]: Ignition 2.20.0
Feb 13 19:17:21.064553 ignition[785]: Stage: disks
Feb 13 19:17:21.064694 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:21.064703 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:21.067634 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:17:21.065627 ignition[785]: disks: disks passed
Feb 13 19:17:21.069593 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:21.065668 ignition[785]: Ignition finished successfully
Feb 13 19:17:21.071052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:17:21.072636 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:17:21.074331 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:17:21.075827 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:17:21.084584 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:17:21.093514 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:17:21.097104 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:17:21.105474 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:17:21.144403 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:17:21.145108 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:17:21.146367 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:17:21.161520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:21.163783 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:17:21.164780 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:17:21.164819 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:17:21.164839 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:21.171157 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:17:21.173061 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:17:21.176047 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805)
Feb 13 19:17:21.178518 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:17:21.178544 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:21.178555 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:21.181405 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:21.184360 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:21.217849 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:17:21.222267 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:17:21.226524 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:17:21.230031 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:17:21.295953 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:17:21.307529 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:17:21.309826 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:17:21.315415 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:17:21.329994 ignition[918]: INFO : Ignition 2.20.0
Feb 13 19:17:21.329994 ignition[918]: INFO : Stage: mount
Feb 13 19:17:21.331272 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:17:21.333959 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:21.333959 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:21.333959 ignition[918]: INFO : mount: mount passed
Feb 13 19:17:21.333959 ignition[918]: INFO : Ignition finished successfully
Feb 13 19:17:21.334331 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:17:21.344510 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:17:21.816350 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:17:21.824582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:21.830396 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (931)
Feb 13 19:17:21.832449 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:17:21.832478 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:21.832489 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:21.835406 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:21.836498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:21.853180 ignition[948]: INFO : Ignition 2.20.0
Feb 13 19:17:21.853180 ignition[948]: INFO : Stage: files
Feb 13 19:17:21.854915 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:21.854915 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:21.854915 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:17:21.858571 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:17:21.858571 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:17:21.858571 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:17:21.858571 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:17:21.858571 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:17:21.857901 unknown[948]: wrote ssh authorized keys file for user: core
Feb 13 19:17:21.866001 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:17:21.866001 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:17:22.004482 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:17:22.717376 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:17:22.719335 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:17:22.719335 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:17:22.755498 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 19:17:22.996347 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:17:23.048232 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:17:23.050192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:17:23.283540 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:17:23.507958 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:17:23.507958 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:17:23.512264 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:23.534642 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:23.538960 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:23.540565 ignition[948]: INFO : files: files passed
Feb 13 19:17:23.540565 ignition[948]: INFO : Ignition finished successfully
Feb 13 19:17:23.540639 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:17:23.553525 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:17:23.556751 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:17:23.558477 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:17:23.559439 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:17:23.564222 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:17:23.567099 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:23.567099 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:23.570144 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:23.569360 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:17:23.571712 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:17:23.589611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:17:23.612042 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:17:23.612161 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:17:23.614335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:17:23.616167 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:17:23.617941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:17:23.618691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:17:23.634152 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:23.643575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:17:23.650840 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:23.652188 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:23.654239 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:17:23.656394 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:17:23.656524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:23.659097 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:17:23.661205 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:17:23.662848 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:17:23.664529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:23.667015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:23.670590 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:17:23.675750 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:17:23.677662 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:17:23.680703 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:17:23.682927 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:17:23.685347 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:17:23.685497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:17:23.690570 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:17:23.692736 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:17:23.693920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:17:23.693999 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:17:23.696014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:17:23.696133 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:17:23.698915 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:17:23.699028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:17:23.700969 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:17:23.702519 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:17:23.707418 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:17:23.708672 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:17:23.710866 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:17:23.712393 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:17:23.712478 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:17:23.714053 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:17:23.714132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:17:23.715666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:17:23.715768 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:17:23.717559 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:17:23.717653 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:17:23.726555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:17:23.728083 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:17:23.729046 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:17:23.729164 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:17:23.731077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:17:23.731174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:17:23.737097 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:17:23.738158 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:17:23.741753 ignition[1002]: INFO : Ignition 2.20.0 Feb 13 19:17:23.741753 ignition[1002]: INFO : Stage: umount Feb 13 19:17:23.744426 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:17:23.744426 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:17:23.744426 ignition[1002]: INFO : umount: umount passed Feb 13 19:17:23.744426 ignition[1002]: INFO : Ignition finished successfully Feb 13 19:17:23.744742 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:17:23.744835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:17:23.748160 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:17:23.748533 systemd[1]: Stopped target network.target - Network. Feb 13 19:17:23.750372 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:17:23.750440 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:17:23.752180 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:17:23.752224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:17:23.754408 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:17:23.754461 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:17:23.756204 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:17:23.756250 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:17:23.758990 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:17:23.761026 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:17:23.762893 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:17:23.762973 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:17:23.764816 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:17:23.764899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:17:23.768490 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 19:17:23.769022 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:17:23.769160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:17:23.772509 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:17:23.772647 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:17:23.775105 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:17:23.775167 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:17:23.784499 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:17:23.785699 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:17:23.785768 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:17:23.787672 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:17:23.787719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:23.789564 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:17:23.789611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:17:23.791509 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:17:23.791552 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
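The umount stage above runs as a second Ignition invocation (PID 1002) and, like the files stage, logs under the syslog identifier "ignition", so the whole per-stage history can be pulled back out of the journal after boot; the "no configs at /usr/lib/ignition/base.d" lines simply mean no distro- or platform-level base configs were merged in. A small sketch using standard journalctl flags (the directory paths are the ones printed above):

    # Review everything each Ignition stage logged (identifier "ignition"):
    journalctl -t ignition -o short-precise
    # Base configs, had any existed, would have been merged from:
    ls /usr/lib/ignition/base.d /usr/lib/ignition/base.platform.d/qemu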
Feb 13 19:17:23.793646 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:17:23.804631 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:17:23.804750 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:17:23.812067 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:17:23.812214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:17:23.814779 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:17:23.814822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:17:23.816668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:17:23.816699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:17:23.818414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:17:23.818462 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:17:23.821420 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:17:23.821467 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:17:23.824085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:17:23.824130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:17:23.838576 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:17:23.839644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:17:23.839707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:17:23.841860 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:17:23.841905 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:17:23.844119 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:17:23.844162 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:17:23.846523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:17:23.846568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:17:23.851129 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:17:23.851215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:17:23.852950 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:17:23.856158 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:17:23.865897 systemd[1]: Switching root. Feb 13 19:17:23.909537 systemd-journald[238]: Journal stopped Feb 13 19:17:24.667982 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
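"Switching root" followed by journald receiving SIGTERM from PID 1 is the initrd handing control to the real root filesystem: initrd-switch-root.service asks systemd to make /sysroot the new / and re-execute itself, which tears down the initrd's journald instance (hence "Journal stopped"). This is roughly equivalent to the following — a sketch, since the log does not show the unit's exact command line:

    # Approximately what the initrd does at this point: /sysroot becomes /,
    # and systemd re-executes itself as PID 1 from the new root.
    systemctl --no-block switch-root /sysroot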
Feb 13 19:17:24.668046 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:17:24.668059 kernel: SELinux: policy capability open_perms=1 Feb 13 19:17:24.668069 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:17:24.668078 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:17:24.668087 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:17:24.668096 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:17:24.668114 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:17:24.668126 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:17:24.668136 kernel: audit: type=1403 audit(1739474244.071:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:17:24.668148 systemd[1]: Successfully loaded SELinux policy in 45.678ms. Feb 13 19:17:24.668160 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.131ms. Feb 13 19:17:24.668172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:17:24.668182 systemd[1]: Detected virtualization kvm. Feb 13 19:17:24.668192 systemd[1]: Detected architecture arm64. Feb 13 19:17:24.668202 systemd[1]: Detected first boot. Feb 13 19:17:24.668212 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:17:24.668223 zram_generator::config[1047]: No configuration found. Feb 13 19:17:24.668235 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:17:24.668245 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:17:24.668255 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:17:24.668278 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:17:24.668289 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:17:24.668300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:17:24.668310 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:17:24.668320 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:17:24.668330 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:17:24.668342 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:17:24.668352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:17:24.668362 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:17:24.668372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:17:24.668403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:17:24.668415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:17:24.668425 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:17:24.668436 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
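Several of the facts systemd prints above (virtualization kvm, architecture arm64, first boot, machine ID from the VM UUID, SELinux policy loaded) can be re-checked from a shell after login. These are standard interfaces, though whether any extra SELinux tooling ships on this image is an assumption, so only kernel-provided paths are used here:

    systemd-detect-virt           # prints "kvm", matching the log
    uname -m                      # prints "aarch64" (arm64)
    cat /etc/machine-id           # initialized from the VM UUID on first boot
    cat /sys/fs/selinux/enforce   # 0 = permissive, 1 = enforcing (if selinuxfs is mounted)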
Feb 13 19:17:24.668448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:17:24.668458 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:17:24.668468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:17:24.668478 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:17:24.668488 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:17:24.668499 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:17:24.668509 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:17:24.668519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:17:24.668531 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:17:24.668542 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:17:24.668552 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:17:24.668562 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:17:24.668572 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:17:24.668584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:17:24.668594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:17:24.668623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:17:24.668633 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:17:24.668645 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:17:24.668655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:17:24.668665 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:17:24.668675 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:17:24.668685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:17:24.668696 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:17:24.668706 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:17:24.668716 systemd[1]: Reached target machines.target - Containers. Feb 13 19:17:24.668726 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:17:24.668738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:24.668748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:17:24.668759 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:17:24.668769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:24.668779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:24.668789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:24.668803 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:17:24.668813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
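Unit names like dev-disk-by\x2dlabel-OEM.device and system-addon\x2dconfig.slice above use systemd's unit-name escaping: "/" becomes "-", and a literal "-" inside a path component becomes \x2d. systemd-escape reproduces the mapping in both directions:

    # Derive the device unit name seen in the log from its path:
    systemd-escape --path --suffix=device /dev/disk/by-label/OEM
    #   -> dev-disk-by\x2dlabel-OEM.device
    # And back again:
    systemd-escape --unescape --path 'dev-disk-by\x2dlabel-OEM'
    #   -> /dev/disk/by-label/OEM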
Feb 13 19:17:24.668826 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:17:24.668836 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:17:24.668847 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:17:24.668856 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:17:24.668866 kernel: fuse: init (API version 7.39) Feb 13 19:17:24.668876 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:17:24.668886 kernel: loop: module loaded Feb 13 19:17:24.668895 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:17:24.668905 kernel: ACPI: bus type drm_connector registered Feb 13 19:17:24.668916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:17:24.668927 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:17:24.668937 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:17:24.668947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:17:24.668957 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:17:24.668967 systemd[1]: Stopped verity-setup.service. Feb 13 19:17:24.668977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:17:24.668986 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:17:24.669011 systemd-journald[1114]: Collecting audit messages is disabled. Feb 13 19:17:24.669038 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:17:24.669048 systemd-journald[1114]: Journal started Feb 13 19:17:24.669069 systemd-journald[1114]: Runtime Journal (/run/log/journal/c170e984d678410a9b34fab3c6afcbe9) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:17:24.447086 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:17:24.465855 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:17:24.467988 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:17:24.671715 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:17:24.672297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:17:24.673525 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:17:24.674715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:17:24.677435 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:17:24.678805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:17:24.680296 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:17:24.680456 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:17:24.681848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:24.681980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:24.683345 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:24.683535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:24.684931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:24.685062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
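Each "Finished modprobe@X.service" here is paired with a kernel message from the module initializing (fuse: init, loop: module loaded, drm_connector registered) because modprobe@.service is a template unit: the instance name is the module to load. A sketch of using it directly (the template's exact ExecStart is an assumption, but upstream systemd's version is essentially a modprobe of the instance name):

    # Template instance == module name; starting the unit loads the module:
    systemctl start modprobe@fuse.service
    lsmod | grep -E '^(fuse|loop|dm_mod)'   # confirm what the kernel logged above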
Feb 13 19:17:24.686614 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:17:24.686744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:17:24.688214 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:24.688366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:24.689672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:17:24.691015 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:17:24.692665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:17:24.705000 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:17:24.713544 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:17:24.715621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:17:24.716725 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:17:24.716764 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:17:24.718707 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:17:24.720977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:17:24.723204 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:17:24.724396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:24.726031 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:17:24.728595 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:17:24.729805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:24.730785 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:17:24.731962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:24.735576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:24.740853 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:17:24.745559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:17:24.746788 systemd-journald[1114]: Time spent on flushing to /var/log/journal/c170e984d678410a9b34fab3c6afcbe9 is 25.902ms for 861 entries. Feb 13 19:17:24.746788 systemd-journald[1114]: System Journal (/var/log/journal/c170e984d678410a9b34fab3c6afcbe9) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:17:24.784354 systemd-journald[1114]: Received client request to flush runtime journal. Feb 13 19:17:24.784488 kernel: loop0: detected capacity change from 0 to 189592 Feb 13 19:17:24.784531 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:17:24.748293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:17:24.749844 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
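The journald lines above show the handoff from the volatile runtime journal in /run/log/journal (5.9M, capped at 47.3M) to the persistent system journal in /var/log/journal (8.0M, capped at 195.6M): systemd-journal-flush.service sends journald a flush request, logged as "Received client request to flush runtime journal". The same request, and a size check, can be issued manually:

    journalctl --flush        # move /run/log/journal entries to /var/log/journal
    journalctl --disk-usage   # report active + archived journal sizes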
Feb 13 19:17:24.752216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:17:24.754184 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:17:24.756467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:17:24.761639 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:17:24.769611 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:17:24.774390 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:17:24.776822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:24.789515 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:17:24.793121 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Feb 13 19:17:24.793136 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Feb 13 19:17:24.794739 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:17:24.797492 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:17:24.799296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:17:24.810554 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 19:17:24.810668 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:17:24.812507 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:17:24.846191 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:17:24.855788 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 19:17:24.854562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:17:24.868199 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 19:17:24.868221 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 19:17:24.873497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:17:24.893406 kernel: loop3: detected capacity change from 0 to 189592 Feb 13 19:17:24.899438 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 19:17:24.903501 kernel: loop5: detected capacity change from 0 to 116808 Feb 13 19:17:24.907058 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:17:24.907509 (sd-merge)[1185]: Merged extensions into '/usr'. Feb 13 19:17:24.913177 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:17:24.913193 systemd[1]: Reloading... Feb 13 19:17:24.968430 zram_generator::config[1211]: No configuration found. Feb 13 19:17:25.044475 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:17:25.062230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:25.097931 systemd[1]: Reloading finished in 184 ms. Feb 13 19:17:25.131855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
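The (sd-merge) lines are systemd-sysext at work: the loopN "detected capacity change" messages correspond to extension images (.raw) being attached via loop devices, after which the containerd-flatcar, docker-flatcar and kubernetes images are overlaid onto /usr — including the kubernetes sysext that Ignition downloaded and symlinked into /etc/extensions earlier. The merge can be inspected and redone by hand:

    systemd-sysext status     # which images are merged, and where from
    systemd-sysext refresh    # unmerge + remerge after adding/removing a .raw
    ls /etc/extensions        # where Ignition placed kubernetes.raw above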
Feb 13 19:17:25.133538 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:17:25.152615 systemd[1]: Starting ensure-sysext.service... Feb 13 19:17:25.154624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:17:25.164730 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:17:25.164752 systemd[1]: Reloading... Feb 13 19:17:25.179516 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:17:25.179765 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:17:25.180424 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:17:25.180627 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 19:17:25.180669 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 19:17:25.191672 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:25.191685 systemd-tmpfiles[1248]: Skipping /boot Feb 13 19:17:25.199296 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:25.199311 systemd-tmpfiles[1248]: Skipping /boot Feb 13 19:17:25.203420 zram_generator::config[1271]: No configuration found. Feb 13 19:17:25.293469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:25.328716 systemd[1]: Reloading finished in 163 ms. Feb 13 19:17:25.345278 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:17:25.358899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:17:25.366090 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:25.368533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:17:25.370867 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:17:25.376706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:17:25.380707 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:17:25.385599 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:17:25.389502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:25.391746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:25.393975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:25.397716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:25.398863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:25.401594 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:17:25.403632 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:17:25.405514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
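The "Duplicate line for path ..., ignoring" warnings above come from overlapping tmpfiles.d(5) entries shipped in different config files, and the "Detected autofs mount point /boot ... Skipping /boot" lines show tmpfiles deliberately avoiding the not-yet-triggered boot automount. A sketch of the line format involved (the mode and ownership below mirror common upstream defaults and are purely illustrative):

    # tmpfiles.d(5) format: Type Path Mode User Group Age Argument
    cat > /etc/tmpfiles.d/example.conf <<'EOF'
    d /var/log/journal 2755 root systemd-journal - -
    EOF
    systemd-tmpfiles --create /etc/tmpfiles.d/example.conf
    # A second config file repeating the same path would trigger exactly the
    # "Duplicate line for path ..., ignoring" warning logged above.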
Feb 13 19:17:25.405640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:25.407177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:25.407296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:25.409092 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:25.409260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:25.416013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:25.419783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:25.423662 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Feb 13 19:17:25.426404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:25.429671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:25.430875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:25.433841 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:17:25.436338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:25.436523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:25.439437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:25.439620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:25.444592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:17:25.446591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:17:25.449027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:17:25.450849 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:25.450987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:25.461287 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:17:25.467160 systemd[1]: Finished ensure-sysext.service. Feb 13 19:17:25.478609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:25.494662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:25.496416 augenrules[1379]: No rules Feb 13 19:17:25.499539 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:25.501733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:25.510597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:25.511923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:25.514952 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:17:25.518120 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:17:25.519814 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
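"Using default interface naming scheme 'v255'" pins the rules systemd-udevd uses to compute predictable network interface names for this systemd release. The scheme can be exercised and pinned explicitly — flag and parameter spellings below follow systemd's documentation, but treat them as assumptions to verify against your systemd version:

    # Show the names/properties the net_id builtin derives for a NIC:
    udevadm test-builtin net_id /sys/class/net/eth0
    # Pin the scheme across systemd upgrades via the kernel command line:
    #   net.naming-scheme=v255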
Feb 13 19:17:25.520432 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:25.520634 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:25.523426 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:17:25.525624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:25.525783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:25.527835 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:25.527963 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:25.529498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:25.529622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:25.531136 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:25.531297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:25.532423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1355) Feb 13 19:17:25.541799 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:17:25.545937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:25.546016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:25.572184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:17:25.577449 systemd-resolved[1315]: Positive Trust Anchors: Feb 13 19:17:25.577523 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:17:25.577555 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:17:25.582563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:17:25.587686 systemd-resolved[1315]: Defaulting to hostname 'linux'. Feb 13 19:17:25.599596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:17:25.600840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:17:25.603548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:17:25.604839 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:17:25.606193 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:17:25.611685 systemd-networkd[1392]: lo: Link UP Feb 13 19:17:25.611693 systemd-networkd[1392]: lo: Gained carrier Feb 13 19:17:25.614302 systemd-networkd[1392]: Enumeration completed Feb 13 19:17:25.614497 systemd[1]: Started systemd-networkd.service - Network Configuration. 
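The ". IN DS 20326 8 2 e06d44b8..." positive trust anchor above is the DNSSEC root key (KSK-2017, key tag 20326) built into systemd-resolved, and the long negative list names private and reserved zones (10.in-addr.arpa, home.arpa, .local, ...) that resolved will never attempt to validate. The live resolver state is available via resolvectl:

    resolvectl status              # per-link DNS servers, DNSSEC mode, domains
    resolvectl query example.org   # resolve through systemd-resolved's stub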
Feb 13 19:17:25.615107 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:25.615118 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:17:25.615749 systemd-networkd[1392]: eth0: Link UP Feb 13 19:17:25.615757 systemd-networkd[1392]: eth0: Gained carrier Feb 13 19:17:25.615772 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:25.618171 systemd[1]: Reached target network.target - Network. Feb 13 19:17:25.626576 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:17:25.630500 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:17:25.635286 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Feb 13 19:17:25.635882 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:17:25.635934 systemd-timesyncd[1393]: Initial clock synchronization to Thu 2025-02-13 19:17:25.660067 UTC. Feb 13 19:17:25.637547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:17:25.645658 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:17:25.648313 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:17:25.665054 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:25.677474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:17:25.711003 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:17:25.712626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:17:25.713837 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:17:25.715068 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:17:25.716397 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:17:25.717767 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:17:25.718897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:17:25.720120 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:17:25.721529 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:17:25.721563 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:17:25.722507 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:17:25.724223 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:17:25.726642 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:17:25.741364 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:17:25.743494 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:17:25.745029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:17:25.746204 systemd[1]: Reached target sockets.target - Socket Units. 
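zz-default.network is Flatcar's lowest-priority catch-all that DHCPs any otherwise unconfigured interface, which is how eth0 ends up with 10.0.0.110/16 from 10.0.0.1 above; systemd-timesyncd then immediately reaches that same gateway as an NTP server on port 123. A minimal catch-all in the same spirit (the real file's contents are not shown in the log; this is only the generic shape of such a fallback):

    cat > /etc/systemd/network/50-dhcp-fallback.network <<'EOF'
    [Match]
    Name=eth*

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0   # shows the DHCPv4 lease the log reports above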
Feb 13 19:17:25.747205 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:17:25.748242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:25.748282 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:25.749136 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:17:25.752219 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:25.751262 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:17:25.753955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:17:25.757566 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:17:25.759565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:17:25.763570 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:17:25.767269 jq[1423]: false Feb 13 19:17:25.767682 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:17:25.769922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:17:25.773730 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:17:25.777949 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:17:25.779693 extend-filesystems[1424]: Found loop3 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found loop4 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found loop5 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda1 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda2 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda3 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found usr Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda4 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda6 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda7 Feb 13 19:17:25.781117 extend-filesystems[1424]: Found vda9 Feb 13 19:17:25.781117 extend-filesystems[1424]: Checking size of /dev/vda9 Feb 13 19:17:25.794315 extend-filesystems[1424]: Resized partition /dev/vda9 Feb 13 19:17:25.800222 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:17:25.785941 dbus-daemon[1422]: [system] SELinux support is enabled Feb 13 19:17:25.783786 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:17:25.800970 extend-filesystems[1441]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:17:25.784186 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:17:25.795627 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:17:25.802244 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:17:25.807757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:17:25.813405 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 19:17:25.816870 jq[1443]: true Feb 13 19:17:25.816710 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:17:25.817163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:17:25.817495 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:17:25.817645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:17:25.820845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:17:25.820991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:17:25.838187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:17:25.838360 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:17:25.840879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:17:25.840898 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:17:25.841704 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:17:25.860776 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:17:25.860820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368) Feb 13 19:17:25.871763 update_engine[1437]: I20250213 19:17:25.857914 1437 main.cc:92] Flatcar Update Engine starting Feb 13 19:17:25.871763 update_engine[1437]: I20250213 19:17:25.867542 1437 update_check_scheduler.cc:74] Next update check in 9m31s Feb 13 19:17:25.872003 jq[1449]: true Feb 13 19:17:25.866038 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:17:25.872106 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:17:25.872106 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:17:25.872106 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:17:25.868456 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:17:25.884988 extend-filesystems[1424]: Resized filesystem in /dev/vda9 Feb 13 19:17:25.871238 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:17:25.891452 tar[1448]: linux-arm64/helm Feb 13 19:17:25.871609 systemd-logind[1435]: New seat seat0. Feb 13 19:17:25.875932 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:17:25.881194 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:17:25.884142 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:17:25.921511 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:17:25.922465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:17:25.927150 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
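The extend-filesystems run above grows the root filesystem to fill its partition: ext4 on /dev/vda9 is resized online, while mounted on /, from 553472 to 1864699 4 KiB blocks — roughly 2.1 GiB (553472 × 4096 bytes) to roughly 7.1 GiB (1864699 × 4096 bytes). The manual equivalent of the resize2fs step it logs:

    # On-line grow of a mounted ext4 filesystem, as done above; with no size
    # argument resize2fs expands the fs to fill the (already enlarged) partition.
    resize2fs /dev/vda9
    df -h /    # confirm the new capacity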
Feb 13 19:17:25.934736 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:17:26.042154 containerd[1457]: time="2025-02-13T19:17:26.041457655Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:17:26.070360 containerd[1457]: time="2025-02-13T19:17:26.070316130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.071890 containerd[1457]: time="2025-02-13T19:17:26.071825674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:26.071890 containerd[1457]: time="2025-02-13T19:17:26.071866145Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:17:26.071890 containerd[1457]: time="2025-02-13T19:17:26.071886240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:17:26.072055 containerd[1457]: time="2025-02-13T19:17:26.072036314Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:17:26.072079 containerd[1457]: time="2025-02-13T19:17:26.072057730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072128 containerd[1457]: time="2025-02-13T19:17:26.072112492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072151 containerd[1457]: time="2025-02-13T19:17:26.072128143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072311 containerd[1457]: time="2025-02-13T19:17:26.072275695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072311 containerd[1457]: time="2025-02-13T19:17:26.072297192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072359 containerd[1457]: time="2025-02-13T19:17:26.072319368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072359 containerd[1457]: time="2025-02-13T19:17:26.072328495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072438 containerd[1457]: time="2025-02-13T19:17:26.072422326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072628 containerd[1457]: time="2025-02-13T19:17:26.072609708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072722 containerd[1457]: time="2025-02-13T19:17:26.072707182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:26.072742 containerd[1457]: time="2025-02-13T19:17:26.072724516Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:17:26.072807 containerd[1457]: time="2025-02-13T19:17:26.072793888Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:17:26.072857 containerd[1457]: time="2025-02-13T19:17:26.072845647Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:17:26.076606 containerd[1457]: time="2025-02-13T19:17:26.076570751Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:17:26.076649 containerd[1457]: time="2025-02-13T19:17:26.076622030Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:17:26.076649 containerd[1457]: time="2025-02-13T19:17:26.076638242Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:17:26.076683 containerd[1457]: time="2025-02-13T19:17:26.076652333Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:17:26.076683 containerd[1457]: time="2025-02-13T19:17:26.076673029Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:17:26.076950 containerd[1457]: time="2025-02-13T19:17:26.076912050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:17:26.077172 containerd[1457]: time="2025-02-13T19:17:26.077148149Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:17:26.077279 containerd[1457]: time="2025-02-13T19:17:26.077262035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:17:26.077299 containerd[1457]: time="2025-02-13T19:17:26.077284332Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:17:26.077324 containerd[1457]: time="2025-02-13T19:17:26.077300024Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:17:26.077324 containerd[1457]: time="2025-02-13T19:17:26.077320679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077365 containerd[1457]: time="2025-02-13T19:17:26.077332689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077365 containerd[1457]: time="2025-02-13T19:17:26.077343897Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077365 containerd[1457]: time="2025-02-13T19:17:26.077356707Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077444 containerd[1457]: time="2025-02-13T19:17:26.077370317Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 19:17:26.077444 containerd[1457]: time="2025-02-13T19:17:26.077405864Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077444 containerd[1457]: time="2025-02-13T19:17:26.077419514Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077444 containerd[1457]: time="2025-02-13T19:17:26.077430763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:17:26.077507 containerd[1457]: time="2025-02-13T19:17:26.077449737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077507 containerd[1457]: time="2025-02-13T19:17:26.077462907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077507 containerd[1457]: time="2025-02-13T19:17:26.077473795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077507 containerd[1457]: time="2025-02-13T19:17:26.077484644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077507 containerd[1457]: time="2025-02-13T19:17:26.077496012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077508302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077519470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077531920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077545530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077559060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077574072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077587642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077598 containerd[1457]: time="2025-02-13T19:17:26.077598730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077726 containerd[1457]: time="2025-02-13T19:17:26.077612741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:17:26.077726 containerd[1457]: time="2025-02-13T19:17:26.077632075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077726 containerd[1457]: time="2025-02-13T19:17:26.077644765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 19:17:26.077726 containerd[1457]: time="2025-02-13T19:17:26.077654532Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:17:26.077829 containerd[1457]: time="2025-02-13T19:17:26.077813613Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:17:26.077853 containerd[1457]: time="2025-02-13T19:17:26.077835029Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:17:26.077853 containerd[1457]: time="2025-02-13T19:17:26.077845157Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:17:26.077889 containerd[1457]: time="2025-02-13T19:17:26.077855925Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:17:26.077889 containerd[1457]: time="2025-02-13T19:17:26.077865012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.077889 containerd[1457]: time="2025-02-13T19:17:26.077876461Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:17:26.077889 containerd[1457]: time="2025-02-13T19:17:26.077886228Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:17:26.077959 containerd[1457]: time="2025-02-13T19:17:26.077896356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:17:26.078286 containerd[1457]: time="2025-02-13T19:17:26.078234012Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:17:26.078286 containerd[1457]: time="2025-02-13T19:17:26.078284930Z" level=info msg="Connect containerd service" Feb 13 19:17:26.078421 containerd[1457]: time="2025-02-13T19:17:26.078328203Z" level=info msg="using legacy CRI server" Feb 13 19:17:26.078421 containerd[1457]: time="2025-02-13T19:17:26.078336770Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:17:26.078586 containerd[1457]: time="2025-02-13T19:17:26.078567184Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:17:26.079260 containerd[1457]: time="2025-02-13T19:17:26.079233089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:17:26.079484 containerd[1457]: time="2025-02-13T19:17:26.079451935Z" level=info msg="Start subscribing containerd event" Feb 13 19:17:26.079510 containerd[1457]: time="2025-02-13T19:17:26.079501733Z" level=info msg="Start recovering state" Feb 13 19:17:26.079571 containerd[1457]: time="2025-02-13T19:17:26.079559016Z" level=info msg="Start event monitor" Feb 13 19:17:26.079591 containerd[1457]: time="2025-02-13T19:17:26.079573787Z" level=info msg="Start snapshots syncer" Feb 13 19:17:26.079591 containerd[1457]: time="2025-02-13T19:17:26.079582394Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:17:26.079591 containerd[1457]: time="2025-02-13T19:17:26.079589119Z" level=info msg="Start streaming server" Feb 13 19:17:26.082149 containerd[1457]: time="2025-02-13T19:17:26.082116315Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:17:26.082202 containerd[1457]: time="2025-02-13T19:17:26.082182885Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:17:26.082321 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:17:26.083733 containerd[1457]: time="2025-02-13T19:17:26.083703558Z" level=info msg="containerd successfully booted in 0.044651s" Feb 13 19:17:26.222429 tar[1448]: linux-arm64/LICENSE Feb 13 19:17:26.222429 tar[1448]: linux-arm64/README.md Feb 13 19:17:26.236186 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:17:26.876656 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:17:26.895041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:17:26.907595 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:17:26.912623 systemd[1]: issuegen.service: Deactivated successfully. 
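[Editor's note] The entries above show containerd finishing startup in ~45 ms and serving on /run/containerd/containerd.sock (both gRPC and ttrpc). A minimal sketch of talking to that socket with the containerd Go client, assuming the era-appropriate v1 client module (the log below reports containerd v1.7.23) is available on the machine; CRI-managed resources live in the "k8s.io" namespace:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the socket the daemon reported as serving above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps its images and containers under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", version.Version, version.Revision)
}
```

The "failed to load cni during init" error above is expected at this point: /etc/cni/net.d is still empty, and the CRI plugin retries CNI setup once a network config appears.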
Feb 13 19:17:26.912836 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:17:26.915283 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:17:26.926437 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:17:26.929377 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:17:26.931364 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:17:26.932650 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:17:27.619576 systemd-networkd[1392]: eth0: Gained IPv6LL Feb 13 19:17:27.625442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:17:27.627736 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:17:27.643622 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:17:27.645971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:27.648095 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:17:27.663549 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:17:27.663725 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:17:27.665658 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:17:27.670129 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:17:28.147425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:28.149034 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:17:28.151204 (kubelet)[1534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:17:28.154129 systemd[1]: Startup finished in 549ms (kernel) + 5.355s (initrd) + 4.131s (userspace) = 10.037s. Feb 13 19:17:28.586843 kubelet[1534]: E0213 19:17:28.586700 1534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:17:28.588955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:17:28.589099 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:17:31.412517 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:17:31.413626 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:51808.service - OpenSSH per-connection server daemon (10.0.0.1:51808). Feb 13 19:17:31.492364 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 51808 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:31.494137 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:31.501279 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:17:31.515622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:17:31.520211 systemd-logind[1435]: New session 1 of user core. Feb 13 19:17:31.529452 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:17:31.532697 systemd[1]: Starting user@500.service - User Manager for UID 500... 
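[Editor's note] The kubelet exit above (status=1/FAILURE) is the classic first-boot pattern: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init/join, so the unit fails and systemd will retry later (see the "Scheduled restart job" entry further down). A trivial sketch mirroring the failing check from the error message:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Same path the kubelet reports in run.go:72 above.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("%s missing; kubelet exits until kubeadm writes it\n", path)
		return
	}
	fmt.Printf("%s present; kubelet can start\n", path)
}
```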
Feb 13 19:17:31.539439 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:17:31.610067 systemd[1551]: Queued start job for default target default.target. Feb 13 19:17:31.620277 systemd[1551]: Created slice app.slice - User Application Slice. Feb 13 19:17:31.620320 systemd[1551]: Reached target paths.target - Paths. Feb 13 19:17:31.620332 systemd[1551]: Reached target timers.target - Timers. Feb 13 19:17:31.621594 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:17:31.631409 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:17:31.631463 systemd[1551]: Reached target sockets.target - Sockets. Feb 13 19:17:31.631474 systemd[1551]: Reached target basic.target - Basic System. Feb 13 19:17:31.631508 systemd[1551]: Reached target default.target - Main User Target. Feb 13 19:17:31.631533 systemd[1551]: Startup finished in 86ms. Feb 13 19:17:31.631883 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:17:31.633138 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:17:31.698251 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:51810.service - OpenSSH per-connection server daemon (10.0.0.1:51810). Feb 13 19:17:31.735042 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 51810 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:31.736438 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:31.741605 systemd-logind[1435]: New session 2 of user core. Feb 13 19:17:31.750549 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:17:31.834987 sshd[1564]: Connection closed by 10.0.0.1 port 51810 Feb 13 19:17:31.835782 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:31.848154 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:51810.service: Deactivated successfully. Feb 13 19:17:31.849760 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:17:31.851666 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:17:31.853797 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:51826.service - OpenSSH per-connection server daemon (10.0.0.1:51826). Feb 13 19:17:31.855445 systemd-logind[1435]: Removed session 2. Feb 13 19:17:31.895479 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51826 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:31.896707 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:31.901444 systemd-logind[1435]: New session 3 of user core. Feb 13 19:17:31.909534 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:17:31.958909 sshd[1571]: Connection closed by 10.0.0.1 port 51826 Feb 13 19:17:31.958310 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:31.968632 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:51826.service: Deactivated successfully. Feb 13 19:17:31.969905 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:17:31.973693 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:17:31.981625 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:51834.service - OpenSSH per-connection server daemon (10.0.0.1:51834). Feb 13 19:17:31.983791 systemd-logind[1435]: Removed session 3. 
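[Editor's note] The rapid open/close cycle of sessions 1 through 3 from 10.0.0.1, all with the same RSA key, looks like an automated harness driving the node over SSH. A hedged sketch of such a client using golang.org/x/crypto/ssh; the key path is hypothetical, and host-key verification is disabled only because this mirrors a throwaway test setup:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key location
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; verify host keys in production
	}
	client, err := ssh.Dial("tcp", "10.0.0.110:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run a no-op and disconnect, producing an open/close pair like those above.
	if _, err := session.CombinedOutput("true"); err != nil {
		log.Fatal(err)
	}
}
```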
Feb 13 19:17:32.015663 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 51834 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:32.016753 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:32.022377 systemd-logind[1435]: New session 4 of user core. Feb 13 19:17:32.027543 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:17:32.078411 sshd[1578]: Connection closed by 10.0.0.1 port 51834 Feb 13 19:17:32.078793 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:32.088489 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:51834.service: Deactivated successfully. Feb 13 19:17:32.090999 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:17:32.092095 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:17:32.093637 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:51836.service - OpenSSH per-connection server daemon (10.0.0.1:51836). Feb 13 19:17:32.094419 systemd-logind[1435]: Removed session 4. Feb 13 19:17:32.140660 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 51836 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:32.142002 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:32.145737 systemd-logind[1435]: New session 5 of user core. Feb 13 19:17:32.153517 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:17:32.210931 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:17:32.213757 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:32.228406 sudo[1586]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:32.229940 sshd[1585]: Connection closed by 10.0.0.1 port 51836 Feb 13 19:17:32.230329 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:32.240923 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:51836.service: Deactivated successfully. Feb 13 19:17:32.243784 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:17:32.245454 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:17:32.257635 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Feb 13 19:17:32.258608 systemd-logind[1435]: Removed session 5. Feb 13 19:17:32.293418 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:32.294537 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:32.297978 systemd-logind[1435]: New session 6 of user core. Feb 13 19:17:32.310518 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:17:32.360987 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:17:32.361254 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:32.364087 sudo[1595]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:32.368268 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:17:32.368770 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:32.385640 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:32.407238 augenrules[1617]: No rules Feb 13 19:17:32.407829 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:32.408019 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:32.409045 sudo[1594]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:32.410290 sshd[1593]: Connection closed by 10.0.0.1 port 51848 Feb 13 19:17:32.410650 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:32.423568 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:51848.service: Deactivated successfully. Feb 13 19:17:32.424750 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:17:32.426942 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:17:32.436684 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). Feb 13 19:17:32.437614 systemd-logind[1435]: Removed session 6. Feb 13 19:17:32.474715 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:17:32.475036 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:32.479237 systemd-logind[1435]: New session 7 of user core. Feb 13 19:17:32.485513 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:17:32.535925 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:17:32.536200 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:32.850710 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:17:32.851122 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:17:33.094258 dockerd[1648]: time="2025-02-13T19:17:33.093824283Z" level=info msg="Starting up" Feb 13 19:17:33.241511 dockerd[1648]: time="2025-02-13T19:17:33.241235608Z" level=info msg="Loading containers: start." Feb 13 19:17:33.365491 kernel: Initializing XFRM netlink socket Feb 13 19:17:33.430873 systemd-networkd[1392]: docker0: Link UP Feb 13 19:17:33.462749 dockerd[1648]: time="2025-02-13T19:17:33.462684037Z" level=info msg="Loading containers: done." Feb 13 19:17:33.475232 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3752969491-merged.mount: Deactivated successfully. 
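[Editor's note] docker.service is starting here; once the daemon logs "API listen on /run/docker.sock" just below, it can be reached with the Docker Go SDK. A minimal sketch, assuming the SDK is available; FromEnv falls back to the default socket when DOCKER_HOST is unset:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ping confirms the daemon finished the initialization logged below.
	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```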
Feb 13 19:17:33.476578 dockerd[1648]: time="2025-02-13T19:17:33.476537287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:17:33.476648 dockerd[1648]: time="2025-02-13T19:17:33.476631424Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:17:33.476756 dockerd[1648]: time="2025-02-13T19:17:33.476731925Z" level=info msg="Daemon has completed initialization" Feb 13 19:17:33.504721 dockerd[1648]: time="2025-02-13T19:17:33.504602844Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:17:33.504803 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:17:34.075991 containerd[1457]: time="2025-02-13T19:17:34.075690599Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:17:34.747109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881752529.mount: Deactivated successfully. Feb 13 19:17:35.536771 containerd[1457]: time="2025-02-13T19:17:35.536557225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:35.537688 containerd[1457]: time="2025-02-13T19:17:35.537615268Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 19:17:35.538308 containerd[1457]: time="2025-02-13T19:17:35.538248469Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:35.541283 containerd[1457]: time="2025-02-13T19:17:35.541229927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:35.542465 containerd[1457]: time="2025-02-13T19:17:35.542433253Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.466707633s" Feb 13 19:17:35.542530 containerd[1457]: time="2025-02-13T19:17:35.542481480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:17:35.543275 containerd[1457]: time="2025-02-13T19:17:35.543058729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:17:36.520640 containerd[1457]: time="2025-02-13T19:17:36.520583280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:36.521408 containerd[1457]: time="2025-02-13T19:17:36.521314924Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 19:17:36.522043 containerd[1457]: time="2025-02-13T19:17:36.522002463Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:36.525348 containerd[1457]: time="2025-02-13T19:17:36.525316572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:36.528498 containerd[1457]: time="2025-02-13T19:17:36.528228499Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 985.139913ms" Feb 13 19:17:36.528498 containerd[1457]: time="2025-02-13T19:17:36.528274285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:17:36.529146 containerd[1457]: time="2025-02-13T19:17:36.529120152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:17:37.658146 containerd[1457]: time="2025-02-13T19:17:37.658100088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:37.659066 containerd[1457]: time="2025-02-13T19:17:37.659026303Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 19:17:37.659783 containerd[1457]: time="2025-02-13T19:17:37.659753172Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:37.663399 containerd[1457]: time="2025-02-13T19:17:37.663349415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:37.664673 containerd[1457]: time="2025-02-13T19:17:37.664419347Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.135268738s" Feb 13 19:17:37.664673 containerd[1457]: time="2025-02-13T19:17:37.664456127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:17:37.665090 containerd[1457]: time="2025-02-13T19:17:37.665050524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:17:38.722364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797514785.mount: Deactivated successfully. Feb 13 19:17:38.723264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:17:38.734685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:38.827423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:17:38.831100 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:17:38.867888 kubelet[1921]: E0213 19:17:38.867786 1921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:17:38.870541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:17:38.870764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:17:39.191550 containerd[1457]: time="2025-02-13T19:17:39.191424867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:39.193579 containerd[1457]: time="2025-02-13T19:17:39.193514996Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:17:39.194172 containerd[1457]: time="2025-02-13T19:17:39.194147354Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:39.196577 containerd[1457]: time="2025-02-13T19:17:39.196538113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:39.197665 containerd[1457]: time="2025-02-13T19:17:39.197636905Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.532551402s" Feb 13 19:17:39.197704 containerd[1457]: time="2025-02-13T19:17:39.197666399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:17:39.198359 containerd[1457]: time="2025-02-13T19:17:39.198174694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:17:39.761999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889042464.mount: Deactivated successfully. 
Feb 13 19:17:40.304455 containerd[1457]: time="2025-02-13T19:17:40.304405899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.304848 containerd[1457]: time="2025-02-13T19:17:40.304773197Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:17:40.306316 containerd[1457]: time="2025-02-13T19:17:40.306261841Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.308827 containerd[1457]: time="2025-02-13T19:17:40.308790030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.310029 containerd[1457]: time="2025-02-13T19:17:40.309980689Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.111775259s" Feb 13 19:17:40.310076 containerd[1457]: time="2025-02-13T19:17:40.310027592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:17:40.310654 containerd[1457]: time="2025-02-13T19:17:40.310630165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:17:40.727046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878481576.mount: Deactivated successfully. 
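[Editor's note] The PullImage sequences above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns) are the kubelet pre-pulling control-plane images through the CRI. The equivalent pull done directly against containerd, as a hedged sketch with the same coredns reference the log shows; Pull with WithPullUnpack fetches and unpacks into the snapshotter in one call:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference pulled in the entries above.
	img, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```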
Feb 13 19:17:40.731693 containerd[1457]: time="2025-02-13T19:17:40.731645234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.732460 containerd[1457]: time="2025-02-13T19:17:40.732286265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:17:40.733511 containerd[1457]: time="2025-02-13T19:17:40.733476044Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.735545 containerd[1457]: time="2025-02-13T19:17:40.735493224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:40.736505 containerd[1457]: time="2025-02-13T19:17:40.736469099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.653204ms" Feb 13 19:17:40.736567 containerd[1457]: time="2025-02-13T19:17:40.736506037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:17:40.737024 containerd[1457]: time="2025-02-13T19:17:40.736967781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:17:41.303132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025036463.mount: Deactivated successfully. Feb 13 19:17:42.610273 containerd[1457]: time="2025-02-13T19:17:42.610065648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:42.611121 containerd[1457]: time="2025-02-13T19:17:42.610837200Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 19:17:42.611924 containerd[1457]: time="2025-02-13T19:17:42.611890841Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:42.615097 containerd[1457]: time="2025-02-13T19:17:42.615063208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:42.616439 containerd[1457]: time="2025-02-13T19:17:42.616403139Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.879403863s" Feb 13 19:17:42.616484 containerd[1457]: time="2025-02-13T19:17:42.616437835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:17:48.618411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:17:48.628603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:48.647490 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-7.scope)... Feb 13 19:17:48.647514 systemd[1]: Reloading... Feb 13 19:17:48.710421 zram_generator::config[2101]: No configuration found. Feb 13 19:17:48.842425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:48.894767 systemd[1]: Reloading finished in 246 ms. Feb 13 19:17:48.937274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:48.939604 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:17:48.939790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:48.941351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:49.034410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:49.041039 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:17:49.094923 kubelet[2148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:49.094923 kubelet[2148]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:17:49.094923 kubelet[2148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
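[Editor's note] The docker.socket warning above fires because /var/run is a legacy compatibility symlink to /run on this system, so systemd canonicalizes ListenStream=/var/run/docker.sock to /run/docker.sock. A quick sketch that makes the symlink visible (assuming the usual Flatcar layout):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// If /var/run is a symlink, its target explains the unit-file rewrite above.
	target, err := os.Readlink("/var/run")
	if err != nil {
		fmt.Println("/var/run is not a symlink:", err)
		return
	}
	fmt.Printf("/var/run -> %s, so /var/run/docker.sock resolves to /run/docker.sock\n", target)
}
```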
Feb 13 19:17:49.095288 kubelet[2148]: I0213 19:17:49.095127 2148 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:17:49.547735 kubelet[2148]: I0213 19:17:49.546950 2148 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:17:49.547735 kubelet[2148]: I0213 19:17:49.546985 2148 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:17:49.547735 kubelet[2148]: I0213 19:17:49.547227 2148 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:17:49.579858 kubelet[2148]: E0213 19:17:49.579810 2148 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:49.582419 kubelet[2148]: I0213 19:17:49.582394 2148 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:49.589673 kubelet[2148]: E0213 19:17:49.589638 2148 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:17:49.589673 kubelet[2148]: I0213 19:17:49.589674 2148 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:17:49.593375 kubelet[2148]: I0213 19:17:49.593342 2148 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:17:49.594448 kubelet[2148]: I0213 19:17:49.594414 2148 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:17:49.594601 kubelet[2148]: I0213 19:17:49.594562 2148 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:17:49.594769 kubelet[2148]: I0213 19:17:49.594589 2148 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:17:49.595017 kubelet[2148]: I0213 19:17:49.594994 2148 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:17:49.595017 kubelet[2148]: I0213 19:17:49.595008 2148 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:17:49.595264 kubelet[2148]: I0213 19:17:49.595243 2148 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:49.596710 kubelet[2148]: I0213 19:17:49.596668 2148 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:17:49.596710 kubelet[2148]: I0213 19:17:49.596691 2148 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:17:49.596832 kubelet[2148]: I0213 19:17:49.596807 2148 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:17:49.596832 kubelet[2148]: I0213 19:17:49.596820 2148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:17:49.597993 kubelet[2148]: W0213 19:17:49.597871 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:49.597993 kubelet[2148]: E0213 19:17:49.597936 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:49.598766 kubelet[2148]: I0213 19:17:49.598534 2148 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:17:49.598766 kubelet[2148]: W0213 19:17:49.598687 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:49.598766 kubelet[2148]: E0213 19:17:49.598729 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:49.601417 kubelet[2148]: I0213 19:17:49.601399 2148 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:17:49.605113 kubelet[2148]: W0213 19:17:49.605089 2148 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:17:49.605969 kubelet[2148]: I0213 19:17:49.605943 2148 server.go:1269] "Started kubelet" Feb 13 19:17:49.607844 kubelet[2148]: I0213 19:17:49.607792 2148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:17:49.608540 kubelet[2148]: I0213 19:17:49.608034 2148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:17:49.608540 kubelet[2148]: I0213 19:17:49.608206 2148 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:17:49.608540 kubelet[2148]: I0213 19:17:49.608461 2148 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:17:49.612980 kubelet[2148]: I0213 19:17:49.612939 2148 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:17:49.614585 kubelet[2148]: I0213 19:17:49.614343 2148 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:17:49.614643 kubelet[2148]: E0213 19:17:49.614630 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:49.615863 kubelet[2148]: I0213 19:17:49.615172 2148 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:17:49.615863 kubelet[2148]: I0213 19:17:49.615239 2148 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:17:49.615863 kubelet[2148]: W0213 19:17:49.615570 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:49.615863 kubelet[2148]: E0213 19:17:49.615622 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:49.616049 kubelet[2148]: E0213 19:17:49.616022 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms" Feb 13 19:17:49.618369 kubelet[2148]: E0213 19:17:49.617287 2148 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823daa1cfcae657 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:17:49.605914199 +0000 UTC m=+0.557569471,LastTimestamp:2025-02-13 19:17:49.605914199 +0000 UTC m=+0.557569471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:17:49.619088 kubelet[2148]: I0213 19:17:49.618832 2148 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:17:49.619303 kubelet[2148]: I0213 19:17:49.619254 2148 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:17:49.619761 kubelet[2148]: I0213 19:17:49.619738 2148 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:17:49.620661 kubelet[2148]: E0213 19:17:49.619748 2148 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:17:49.622088 kubelet[2148]: I0213 19:17:49.622062 2148 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:17:49.631415 kubelet[2148]: I0213 19:17:49.631329 2148 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:17:49.632640 kubelet[2148]: I0213 19:17:49.632615 2148 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:17:49.632640 kubelet[2148]: I0213 19:17:49.632644 2148 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:17:49.632749 kubelet[2148]: I0213 19:17:49.632671 2148 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:17:49.632749 kubelet[2148]: E0213 19:17:49.632718 2148 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:17:49.634044 kubelet[2148]: W0213 19:17:49.633767 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:49.634044 kubelet[2148]: E0213 19:17:49.633831 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:49.634713 kubelet[2148]: I0213 19:17:49.634680 2148 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:17:49.634713 kubelet[2148]: I0213 19:17:49.634705 2148 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:17:49.634799 kubelet[2148]: I0213 19:17:49.634724 2148 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:49.696252 kubelet[2148]: I0213 19:17:49.696207 2148 policy_none.go:49] "None policy: Start" Feb 13 19:17:49.697011 kubelet[2148]: I0213 19:17:49.696991 2148 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:17:49.697051 kubelet[2148]: I0213 19:17:49.697021 2148 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:17:49.703201 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:17:49.714812 kubelet[2148]: E0213 19:17:49.714783 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:49.716669 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:17:49.719476 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:17:49.731131 kubelet[2148]: I0213 19:17:49.731099 2148 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:17:49.731329 kubelet[2148]: I0213 19:17:49.731302 2148 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:17:49.731368 kubelet[2148]: I0213 19:17:49.731319 2148 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:17:49.731634 kubelet[2148]: I0213 19:17:49.731609 2148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:17:49.733811 kubelet[2148]: E0213 19:17:49.733776 2148 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:17:49.739784 systemd[1]: Created slice kubepods-burstable-pod4d434001f97402e3103d90d652bc69f0.slice - libcontainer container kubepods-burstable-pod4d434001f97402e3103d90d652bc69f0.slice. 
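[Editor's note] Every "connection refused" against https://10.0.0.110:6443 above is the kubelet bootstrapping before its own static kube-apiserver pod (whose kubepods slices are being created here) is running; the watches and lease requests keep retrying until the port opens. A quick probe of the same endpoint, as a sketch:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same host:port the kubelet is retrying in the entries above.
	conn, err := net.DialTimeout("tcp", "10.0.0.110:6443", 2*time.Second)
	if err != nil {
		// Expected until the static kube-apiserver pod comes up.
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```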
Feb 13 19:17:49.753053 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 19:17:49.758023 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 19:17:49.816733 kubelet[2148]: E0213 19:17:49.816629 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms" Feb 13 19:17:49.833914 kubelet[2148]: I0213 19:17:49.833867 2148 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:49.834621 kubelet[2148]: E0213 19:17:49.834584 2148 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Feb 13 19:17:49.917110 kubelet[2148]: I0213 19:17:49.917027 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:49.917167 kubelet[2148]: I0213 19:17:49.917122 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:49.917205 kubelet[2148]: I0213 19:17:49.917172 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:49.917271 kubelet[2148]: I0213 19:17:49.917257 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:17:49.917296 kubelet[2148]: I0213 19:17:49.917279 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:49.917316 kubelet[2148]: I0213 19:17:49.917296 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:49.917340 kubelet[2148]: I0213 19:17:49.917314 2148 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:49.917370 kubelet[2148]: I0213 19:17:49.917356 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:49.917430 kubelet[2148]: I0213 19:17:49.917372 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:50.036742 kubelet[2148]: I0213 19:17:50.036684 2148 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:50.037071 kubelet[2148]: E0213 19:17:50.037037 2148 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Feb 13 19:17:50.052719 kubelet[2148]: E0213 19:17:50.052683 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.053446 containerd[1457]: time="2025-02-13T19:17:50.053410078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d434001f97402e3103d90d652bc69f0,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:50.056613 kubelet[2148]: E0213 19:17:50.056583 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.057106 containerd[1457]: time="2025-02-13T19:17:50.057040283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:50.060763 kubelet[2148]: E0213 19:17:50.060667 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.061074 containerd[1457]: time="2025-02-13T19:17:50.061043660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:50.217461 kubelet[2148]: E0213 19:17:50.217308 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms" Feb 13 19:17:50.439084 kubelet[2148]: I0213 19:17:50.438837 2148 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:50.439320 kubelet[2148]: E0213 19:17:50.439295 2148 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Feb 13 19:17:50.456771 kubelet[2148]: W0213 19:17:50.456712 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:50.456956 kubelet[2148]: E0213 19:17:50.456931 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:50.512793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333716436.mount: Deactivated successfully. Feb 13 19:17:50.520517 containerd[1457]: time="2025-02-13T19:17:50.520456064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:50.521545 containerd[1457]: time="2025-02-13T19:17:50.521490670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:17:50.523662 containerd[1457]: time="2025-02-13T19:17:50.523617183Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:50.528620 containerd[1457]: time="2025-02-13T19:17:50.528576458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:50.529327 containerd[1457]: time="2025-02-13T19:17:50.529278586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:17:50.530073 containerd[1457]: time="2025-02-13T19:17:50.530030813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:50.531149 containerd[1457]: time="2025-02-13T19:17:50.531110675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:50.531228 containerd[1457]: time="2025-02-13T19:17:50.531193984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:17:50.532095 containerd[1457]: time="2025-02-13T19:17:50.532057610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.565703ms" Feb 13 19:17:50.535280 containerd[1457]: time="2025-02-13T19:17:50.535138541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 474.030739ms" Feb 13 19:17:50.537061 containerd[1457]: time="2025-02-13T19:17:50.537025889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.88357ms" Feb 13 19:17:50.643265 kubelet[2148]: W0213 19:17:50.643169 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:50.643265 kubelet[2148]: E0213 19:17:50.643216 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:50.681870 containerd[1457]: time="2025-02-13T19:17:50.681545160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:50.681870 containerd[1457]: time="2025-02-13T19:17:50.681628549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:50.681870 containerd[1457]: time="2025-02-13T19:17:50.681641314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.682162 containerd[1457]: time="2025-02-13T19:17:50.682034413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.682245 containerd[1457]: time="2025-02-13T19:17:50.682120323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:50.682245 containerd[1457]: time="2025-02-13T19:17:50.682195150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:50.682245 containerd[1457]: time="2025-02-13T19:17:50.682214036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.682350 containerd[1457]: time="2025-02-13T19:17:50.682299667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.682414 containerd[1457]: time="2025-02-13T19:17:50.682316993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:50.682414 containerd[1457]: time="2025-02-13T19:17:50.682370092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:50.682603 containerd[1457]: time="2025-02-13T19:17:50.682560359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.682701 containerd[1457]: time="2025-02-13T19:17:50.682662795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:50.707602 systemd[1]: Started cri-containerd-617ec4a221d3598e3a9cf78d7bf8fd051da33cec96d2a173253fa9d30c507a64.scope - libcontainer container 617ec4a221d3598e3a9cf78d7bf8fd051da33cec96d2a173253fa9d30c507a64. Feb 13 19:17:50.708749 systemd[1]: Started cri-containerd-bf9f2a33f4d50e2f0e905a83789c0b02c20c3dcd401c39bd0a4c4c5251d5c62e.scope - libcontainer container bf9f2a33f4d50e2f0e905a83789c0b02c20c3dcd401c39bd0a4c4c5251d5c62e. Feb 13 19:17:50.710820 systemd[1]: Started cri-containerd-e158d1fabf90fc467056b1856c809c7c5a2dac289a656870949a8e45a3e5ff14.scope - libcontainer container e158d1fabf90fc467056b1856c809c7c5a2dac289a656870949a8e45a3e5ff14. Feb 13 19:17:50.744182 containerd[1457]: time="2025-02-13T19:17:50.744120227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"617ec4a221d3598e3a9cf78d7bf8fd051da33cec96d2a173253fa9d30c507a64\"" Feb 13 19:17:50.745690 containerd[1457]: time="2025-02-13T19:17:50.745658012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d434001f97402e3103d90d652bc69f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf9f2a33f4d50e2f0e905a83789c0b02c20c3dcd401c39bd0a4c4c5251d5c62e\"" Feb 13 19:17:50.747351 kubelet[2148]: E0213 19:17:50.747287 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.747453 kubelet[2148]: E0213 19:17:50.747327 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.750205 containerd[1457]: time="2025-02-13T19:17:50.750170769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e158d1fabf90fc467056b1856c809c7c5a2dac289a656870949a8e45a3e5ff14\"" Feb 13 19:17:50.751050 containerd[1457]: time="2025-02-13T19:17:50.751019509Z" level=info msg="CreateContainer within sandbox \"bf9f2a33f4d50e2f0e905a83789c0b02c20c3dcd401c39bd0a4c4c5251d5c62e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:17:50.751435 containerd[1457]: time="2025-02-13T19:17:50.751260235Z" level=info msg="CreateContainer within sandbox \"617ec4a221d3598e3a9cf78d7bf8fd051da33cec96d2a173253fa9d30c507a64\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:17:50.752270 kubelet[2148]: E0213 19:17:50.752236 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:50.753999 containerd[1457]: time="2025-02-13T19:17:50.753967073Z" level=info msg="CreateContainer within sandbox \"e158d1fabf90fc467056b1856c809c7c5a2dac289a656870949a8e45a3e5ff14\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:17:50.771128 containerd[1457]: time="2025-02-13T19:17:50.770992859Z" level=info msg="CreateContainer within sandbox \"617ec4a221d3598e3a9cf78d7bf8fd051da33cec96d2a173253fa9d30c507a64\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"caf74b0205da03d6746d52f2e564adc3e0beeec8da8eda09d5a9a38693744759\"" Feb 13 19:17:50.772480 containerd[1457]: time="2025-02-13T19:17:50.772449574Z" level=info msg="StartContainer for \"caf74b0205da03d6746d52f2e564adc3e0beeec8da8eda09d5a9a38693744759\"" Feb 13 19:17:50.774450 containerd[1457]: time="2025-02-13T19:17:50.774403386Z" level=info msg="CreateContainer within sandbox \"e158d1fabf90fc467056b1856c809c7c5a2dac289a656870949a8e45a3e5ff14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b479648d4234e590f5d02c5d1a7029c334bdf2a61784fd1a571f45282fed3b6a\"" Feb 13 19:17:50.774937 containerd[1457]: time="2025-02-13T19:17:50.774906764Z" level=info msg="StartContainer for \"b479648d4234e590f5d02c5d1a7029c334bdf2a61784fd1a571f45282fed3b6a\"" Feb 13 19:17:50.775748 containerd[1457]: time="2025-02-13T19:17:50.775674076Z" level=info msg="CreateContainer within sandbox \"bf9f2a33f4d50e2f0e905a83789c0b02c20c3dcd401c39bd0a4c4c5251d5c62e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"557ab75955aa20ae7833cbcfd7d8af763fd5070a6535b89f8c6baff93e28e131\"" Feb 13 19:17:50.777210 containerd[1457]: time="2025-02-13T19:17:50.776107589Z" level=info msg="StartContainer for \"557ab75955aa20ae7833cbcfd7d8af763fd5070a6535b89f8c6baff93e28e131\"" Feb 13 19:17:50.803525 systemd[1]: Started cri-containerd-557ab75955aa20ae7833cbcfd7d8af763fd5070a6535b89f8c6baff93e28e131.scope - libcontainer container 557ab75955aa20ae7833cbcfd7d8af763fd5070a6535b89f8c6baff93e28e131. Feb 13 19:17:50.804831 systemd[1]: Started cri-containerd-b479648d4234e590f5d02c5d1a7029c334bdf2a61784fd1a571f45282fed3b6a.scope - libcontainer container b479648d4234e590f5d02c5d1a7029c334bdf2a61784fd1a571f45282fed3b6a. Feb 13 19:17:50.805958 systemd[1]: Started cri-containerd-caf74b0205da03d6746d52f2e564adc3e0beeec8da8eda09d5a9a38693744759.scope - libcontainer container caf74b0205da03d6746d52f2e564adc3e0beeec8da8eda09d5a9a38693744759. 
Feb 13 19:17:50.835639 kubelet[2148]: W0213 19:17:50.835423 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Feb 13 19:17:50.835639 kubelet[2148]: E0213 19:17:50.835501 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:50.873400 containerd[1457]: time="2025-02-13T19:17:50.872421358Z" level=info msg="StartContainer for \"557ab75955aa20ae7833cbcfd7d8af763fd5070a6535b89f8c6baff93e28e131\" returns successfully" Feb 13 19:17:50.873400 containerd[1457]: time="2025-02-13T19:17:50.872579094Z" level=info msg="StartContainer for \"caf74b0205da03d6746d52f2e564adc3e0beeec8da8eda09d5a9a38693744759\" returns successfully" Feb 13 19:17:50.873400 containerd[1457]: time="2025-02-13T19:17:50.872606504Z" level=info msg="StartContainer for \"b479648d4234e590f5d02c5d1a7029c334bdf2a61784fd1a571f45282fed3b6a\" returns successfully" Feb 13 19:17:51.019404 kubelet[2148]: E0213 19:17:51.018900 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="1.6s" Feb 13 19:17:51.240700 kubelet[2148]: I0213 19:17:51.240596 2148 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:51.654487 kubelet[2148]: E0213 19:17:51.654369 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:51.655950 kubelet[2148]: E0213 19:17:51.655924 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:51.657694 kubelet[2148]: E0213 19:17:51.657670 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:52.601334 kubelet[2148]: I0213 19:17:52.601290 2148 apiserver.go:52] "Watching apiserver" Feb 13 19:17:52.615349 kubelet[2148]: I0213 19:17:52.615304 2148 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:17:52.622936 kubelet[2148]: E0213 19:17:52.622825 2148 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:17:52.660463 kubelet[2148]: E0213 19:17:52.660437 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:52.716106 kubelet[2148]: I0213 19:17:52.716052 2148 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:17:52.716106 kubelet[2148]: E0213 19:17:52.716085 2148 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
Feb 13 19:17:54.542758 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)... Feb 13 19:17:54.542774 systemd[1]: Reloading... Feb 13 19:17:54.614501 zram_generator::config[2472]: No configuration found. Feb 13 19:17:54.691150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:54.757442 systemd[1]: Reloading finished in 214 ms. Feb 13 19:17:54.791782 kubelet[2148]: I0213 19:17:54.791442 2148 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:54.791629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:54.804969 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:17:54.805296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:54.817060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:54.914927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:54.919502 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:17:54.958190 kubelet[2508]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:54.958190 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:17:54.958190 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:54.958541 kubelet[2508]: I0213 19:17:54.958243 2508 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:17:54.965079 kubelet[2508]: I0213 19:17:54.965042 2508 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:17:54.965079 kubelet[2508]: I0213 19:17:54.965074 2508 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:17:54.965429 kubelet[2508]: I0213 19:17:54.965320 2508 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:17:54.966740 kubelet[2508]: I0213 19:17:54.966719 2508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:17:54.968925 kubelet[2508]: I0213 19:17:54.968849 2508 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:54.972177 kubelet[2508]: E0213 19:17:54.972126 2508 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:17:54.972177 kubelet[2508]: I0213 19:17:54.972159 2508 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Feb 13 19:17:54.974585 kubelet[2508]: I0213 19:17:54.974565 2508 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:17:54.974718 kubelet[2508]: I0213 19:17:54.974704 2508 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:17:54.974830 kubelet[2508]: I0213 19:17:54.974804 2508 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:17:54.975002 kubelet[2508]: I0213 19:17:54.974834 2508 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:17:54.975078 kubelet[2508]: I0213 19:17:54.975014 2508 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:17:54.975078 kubelet[2508]: I0213 19:17:54.975024 2508 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:17:54.975078 kubelet[2508]: I0213 19:17:54.975054 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:54.975172 kubelet[2508]: I0213 19:17:54.975160 2508 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:17:54.975199 kubelet[2508]: I0213 19:17:54.975177 2508 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:17:54.975199 kubelet[2508]: I0213 19:17:54.975197 2508 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:17:54.975235 kubelet[2508]: I0213 19:17:54.975208 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:17:54.976396 kubelet[2508]: I0213 19:17:54.976341 2508 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:17:54.977026 kubelet[2508]: I0213 19:17:54.976993 2508 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:17:54.977448 kubelet[2508]: I0213 19:17:54.977421 2508 server.go:1269] "Started kubelet" Feb 13 19:17:54.978265 
kubelet[2508]: I0213 19:17:54.978213 2508 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:17:54.978549 kubelet[2508]: I0213 19:17:54.978520 2508 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:17:54.978657 kubelet[2508]: I0213 19:17:54.978629 2508 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:17:54.979947 kubelet[2508]: I0213 19:17:54.979920 2508 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:17:54.982901 kubelet[2508]: I0213 19:17:54.982869 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:17:54.985023 kubelet[2508]: I0213 19:17:54.983173 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:17:54.988330 kubelet[2508]: I0213 19:17:54.988299 2508 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:17:54.988585 kubelet[2508]: E0213 19:17:54.988555 2508 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:54.992111 kubelet[2508]: I0213 19:17:54.992075 2508 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:17:54.995428 kubelet[2508]: I0213 19:17:54.992508 2508 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:17:54.995428 kubelet[2508]: I0213 19:17:54.992598 2508 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:17:54.998100 kubelet[2508]: I0213 19:17:54.997293 2508 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:17:54.999175 kubelet[2508]: E0213 19:17:54.999152 2508 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:17:55.001773 kubelet[2508]: I0213 19:17:55.001726 2508 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:17:55.002418 kubelet[2508]: I0213 19:17:55.002179 2508 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:17:55.002976 kubelet[2508]: I0213 19:17:55.002947 2508 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:17:55.002976 kubelet[2508]: I0213 19:17:55.002976 2508 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:17:55.003084 kubelet[2508]: I0213 19:17:55.002995 2508 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:17:55.003084 kubelet[2508]: E0213 19:17:55.003043 2508 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:17:55.035575 kubelet[2508]: I0213 19:17:55.035546 2508 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:17:55.035575 kubelet[2508]: I0213 19:17:55.035569 2508 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:17:55.035736 kubelet[2508]: I0213 19:17:55.035593 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:55.035772 kubelet[2508]: I0213 19:17:55.035754 2508 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:17:55.035799 kubelet[2508]: I0213 19:17:55.035765 2508 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:17:55.035799 kubelet[2508]: I0213 19:17:55.035783 2508 policy_none.go:49] "None policy: Start" Feb 13 19:17:55.036354 kubelet[2508]: I0213 19:17:55.036337 2508 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:17:55.036443 kubelet[2508]: I0213 19:17:55.036362 2508 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:17:55.036561 kubelet[2508]: I0213 19:17:55.036546 2508 state_mem.go:75] "Updated machine memory state" Feb 13 19:17:55.041112 kubelet[2508]: I0213 19:17:55.041085 2508 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:17:55.041452 kubelet[2508]: I0213 19:17:55.041257 2508 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:17:55.041452 kubelet[2508]: I0213 19:17:55.041275 2508 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:17:55.041546 kubelet[2508]: I0213 19:17:55.041506 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:17:55.108951 kubelet[2508]: E0213 19:17:55.108837 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:55.145898 kubelet[2508]: I0213 19:17:55.145864 2508 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:55.155911 kubelet[2508]: I0213 19:17:55.155873 2508 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:17:55.156003 kubelet[2508]: I0213 19:17:55.155964 2508 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:17:55.201011 kubelet[2508]: I0213 19:17:55.200961 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:55.201011 kubelet[2508]: I0213 19:17:55.201012 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:55.201179 kubelet[2508]: I0213 19:17:55.201042 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:55.201179 kubelet[2508]: I0213 19:17:55.201061 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:55.201179 kubelet[2508]: I0213 19:17:55.201076 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:55.201179 kubelet[2508]: I0213 19:17:55.201114 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:55.201179 kubelet[2508]: I0213 19:17:55.201151 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d434001f97402e3103d90d652bc69f0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d434001f97402e3103d90d652bc69f0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:55.201299 kubelet[2508]: I0213 19:17:55.201178 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:55.201299 kubelet[2508]: I0213 19:17:55.201200 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:17:55.409504 kubelet[2508]: E0213 19:17:55.409196 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:55.409504 kubelet[2508]: E0213 19:17:55.409255 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:55.409633 kubelet[2508]: E0213 19:17:55.409530 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:55.555005 sudo[2546]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:17:55.555287 sudo[2546]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:17:55.976158 kubelet[2508]: I0213 19:17:55.976123 2508 apiserver.go:52] "Watching apiserver" Feb 13 19:17:55.985404 sudo[2546]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:55.992440 kubelet[2508]: I0213 19:17:55.992301 2508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:17:56.016293 kubelet[2508]: E0213 19:17:56.016004 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:56.022372 kubelet[2508]: E0213 19:17:56.021677 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:17:56.022372 kubelet[2508]: E0213 19:17:56.021835 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:56.022372 kubelet[2508]: E0213 19:17:56.021677 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:56.022372 kubelet[2508]: E0213 19:17:56.022108 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:56.044608 kubelet[2508]: I0213 19:17:56.044527 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.044511034 podStartE2EDuration="2.044511034s" podCreationTimestamp="2025-02-13 19:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:56.043512422 +0000 UTC m=+1.120566862" watchObservedRunningTime="2025-02-13 19:17:56.044511034 +0000 UTC m=+1.121565474" Feb 13 19:17:56.044752 kubelet[2508]: I0213 19:17:56.044677 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.044671801 podStartE2EDuration="1.044671801s" podCreationTimestamp="2025-02-13 19:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:56.036483726 +0000 UTC m=+1.113538166" watchObservedRunningTime="2025-02-13 19:17:56.044671801 +0000 UTC m=+1.121726241" Feb 13 19:17:57.016810 kubelet[2508]: E0213 19:17:57.016770 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:57.017170 kubelet[2508]: E0213 19:17:57.016881 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:58.141322 sudo[1628]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:58.142487 sshd[1627]: Connection closed by 10.0.0.1 port 51858 Feb 13 
19:17:58.142846 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:58.145231 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:17:58.145430 systemd[1]: session-7.scope: Consumed 8.422s CPU time, 155.7M memory peak, 0B memory swap peak. Feb 13 19:17:58.145992 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:51858.service: Deactivated successfully. Feb 13 19:17:58.152136 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:17:58.153056 systemd-logind[1435]: Removed session 7. Feb 13 19:17:58.528236 kubelet[2508]: E0213 19:17:58.528121 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:17:59.977673 kubelet[2508]: I0213 19:17:59.977640 2508 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:17:59.978608 kubelet[2508]: I0213 19:17:59.978160 2508 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:17:59.978668 containerd[1457]: time="2025-02-13T19:17:59.977950661Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:18:00.946331 kubelet[2508]: I0213 19:18:00.946051 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.946034508 podStartE2EDuration="5.946034508s" podCreationTimestamp="2025-02-13 19:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:56.051864145 +0000 UTC m=+1.128918585" watchObservedRunningTime="2025-02-13 19:18:00.946034508 +0000 UTC m=+6.023088988" Feb 13 19:18:00.953631 systemd[1]: Created slice kubepods-besteffort-podc251f23e_86eb_4164_9040_e8d39b129217.slice - libcontainer container kubepods-besteffort-podc251f23e_86eb_4164_9040_e8d39b129217.slice. Feb 13 19:18:00.968875 systemd[1]: Created slice kubepods-burstable-podbffe49f9_8b8e_49b8_9866_39a95ea951b7.slice - libcontainer container kubepods-burstable-podbffe49f9_8b8e_49b8_9866_39a95ea951b7.slice. 
Feb 13 19:18:01.039450 kubelet[2508]: I0213 19:18:01.039396 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c251f23e-86eb-4164-9040-e8d39b129217-lib-modules\") pod \"kube-proxy-mbxj5\" (UID: \"c251f23e-86eb-4164-9040-e8d39b129217\") " pod="kube-system/kube-proxy-mbxj5" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039452 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hostproc\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039494 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4jw\" (UniqueName: \"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-kube-api-access-4d4jw\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039511 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-lib-modules\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039566 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bffe49f9-8b8e-49b8-9866-39a95ea951b7-clustermesh-secrets\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039586 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-kernel\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039776 kubelet[2508]: I0213 19:18:01.039601 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-xtables-lock\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039655 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-run\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039669 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-etc-cni-netd\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039712 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-net\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039730 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4j7\" (UniqueName: \"kubernetes.io/projected/c251f23e-86eb-4164-9040-e8d39b129217-kube-api-access-sf4j7\") pod \"kube-proxy-mbxj5\" (UID: \"c251f23e-86eb-4164-9040-e8d39b129217\") " pod="kube-system/kube-proxy-mbxj5" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039745 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-config-path\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.039983 kubelet[2508]: I0213 19:18:01.039761 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cni-path\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.040220 kubelet[2508]: I0213 19:18:01.039799 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c251f23e-86eb-4164-9040-e8d39b129217-kube-proxy\") pod \"kube-proxy-mbxj5\" (UID: \"c251f23e-86eb-4164-9040-e8d39b129217\") " pod="kube-system/kube-proxy-mbxj5" Feb 13 19:18:01.040220 kubelet[2508]: I0213 19:18:01.039817 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c251f23e-86eb-4164-9040-e8d39b129217-xtables-lock\") pod \"kube-proxy-mbxj5\" (UID: \"c251f23e-86eb-4164-9040-e8d39b129217\") " pod="kube-system/kube-proxy-mbxj5" Feb 13 19:18:01.040220 kubelet[2508]: I0213 19:18:01.039857 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-bpf-maps\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.040220 kubelet[2508]: I0213 19:18:01.039877 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-cgroup\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.040220 kubelet[2508]: I0213 19:18:01.039900 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hubble-tls\") pod \"cilium-fjt5s\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " pod="kube-system/cilium-fjt5s" Feb 13 19:18:01.100183 systemd[1]: Created slice kubepods-besteffort-pod25c8a244_0d3a_45b7_bd53_9927e1593130.slice - libcontainer container kubepods-besteffort-pod25c8a244_0d3a_45b7_bd53_9927e1593130.slice. 
Feb 13 19:18:01.141206 kubelet[2508]: I0213 19:18:01.141111 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25c8a244-0d3a-45b7-bd53-9927e1593130-cilium-config-path\") pod \"cilium-operator-5d85765b45-r8288\" (UID: \"25c8a244-0d3a-45b7-bd53-9927e1593130\") " pod="kube-system/cilium-operator-5d85765b45-r8288" Feb 13 19:18:01.142653 kubelet[2508]: I0213 19:18:01.142087 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9pj\" (UniqueName: \"kubernetes.io/projected/25c8a244-0d3a-45b7-bd53-9927e1593130-kube-api-access-kc9pj\") pod \"cilium-operator-5d85765b45-r8288\" (UID: \"25c8a244-0d3a-45b7-bd53-9927e1593130\") " pod="kube-system/cilium-operator-5d85765b45-r8288" Feb 13 19:18:01.264057 kubelet[2508]: E0213 19:18:01.263909 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:01.265419 containerd[1457]: time="2025-02-13T19:18:01.265206462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbxj5,Uid:c251f23e-86eb-4164-9040-e8d39b129217,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:01.271570 kubelet[2508]: E0213 19:18:01.271541 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:01.272121 containerd[1457]: time="2025-02-13T19:18:01.272031846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fjt5s,Uid:bffe49f9-8b8e-49b8-9866-39a95ea951b7,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:01.291228 containerd[1457]: time="2025-02-13T19:18:01.290708948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:01.291228 containerd[1457]: time="2025-02-13T19:18:01.291081801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:01.291228 containerd[1457]: time="2025-02-13T19:18:01.291094044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.291228 containerd[1457]: time="2025-02-13T19:18:01.291186707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.310588 systemd[1]: Started cri-containerd-b303acb2eb2815cbe6beebbc5c3f256d8d0297856611c3e08287008e596336e8.scope - libcontainer container b303acb2eb2815cbe6beebbc5c3f256d8d0297856611c3e08287008e596336e8. Feb 13 19:18:01.312364 containerd[1457]: time="2025-02-13T19:18:01.312061478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:01.312364 containerd[1457]: time="2025-02-13T19:18:01.312145739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:01.312364 containerd[1457]: time="2025-02-13T19:18:01.312175867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.312553 containerd[1457]: time="2025-02-13T19:18:01.312303819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.333567 systemd[1]: Started cri-containerd-e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5.scope - libcontainer container e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5. Feb 13 19:18:01.335106 containerd[1457]: time="2025-02-13T19:18:01.335066221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbxj5,Uid:c251f23e-86eb-4164-9040-e8d39b129217,Namespace:kube-system,Attempt:0,} returns sandbox id \"b303acb2eb2815cbe6beebbc5c3f256d8d0297856611c3e08287008e596336e8\"" Feb 13 19:18:01.335936 kubelet[2508]: E0213 19:18:01.335885 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:01.339130 containerd[1457]: time="2025-02-13T19:18:01.339083264Z" level=info msg="CreateContainer within sandbox \"b303acb2eb2815cbe6beebbc5c3f256d8d0297856611c3e08287008e596336e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:18:01.361744 containerd[1457]: time="2025-02-13T19:18:01.361694428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fjt5s,Uid:bffe49f9-8b8e-49b8-9866-39a95ea951b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\"" Feb 13 19:18:01.363614 kubelet[2508]: E0213 19:18:01.362523 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:01.365132 containerd[1457]: time="2025-02-13T19:18:01.365097558Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:18:01.403631 kubelet[2508]: E0213 19:18:01.403469 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:01.404347 containerd[1457]: time="2025-02-13T19:18:01.404300264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r8288,Uid:25c8a244-0d3a-45b7-bd53-9927e1593130,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:01.417253 containerd[1457]: time="2025-02-13T19:18:01.417212768Z" level=info msg="CreateContainer within sandbox \"b303acb2eb2815cbe6beebbc5c3f256d8d0297856611c3e08287008e596336e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2a44e20be4118e741af548270595ba38cf24a7b522269d2affa1a5cd19ad1e1\"" Feb 13 19:18:01.418923 containerd[1457]: time="2025-02-13T19:18:01.417709132Z" level=info msg="StartContainer for \"f2a44e20be4118e741af548270595ba38cf24a7b522269d2affa1a5cd19ad1e1\"" Feb 13 19:18:01.427799 containerd[1457]: time="2025-02-13T19:18:01.427714829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:01.427915 containerd[1457]: time="2025-02-13T19:18:01.427778005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:01.427915 containerd[1457]: time="2025-02-13T19:18:01.427789488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.427915 containerd[1457]: time="2025-02-13T19:18:01.427875469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:01.445607 systemd[1]: Started cri-containerd-5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7.scope - libcontainer container 5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7. Feb 13 19:18:01.459555 systemd[1]: Started cri-containerd-f2a44e20be4118e741af548270595ba38cf24a7b522269d2affa1a5cd19ad1e1.scope - libcontainer container f2a44e20be4118e741af548270595ba38cf24a7b522269d2affa1a5cd19ad1e1. Feb 13 19:18:01.516927 containerd[1457]: time="2025-02-13T19:18:01.516827195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r8288,Uid:25c8a244-0d3a-45b7-bd53-9927e1593130,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\"" Feb 13 19:18:01.518143 containerd[1457]: time="2025-02-13T19:18:01.518007449Z" level=info msg="StartContainer for \"f2a44e20be4118e741af548270595ba38cf24a7b522269d2affa1a5cd19ad1e1\" returns successfully" Feb 13 19:18:01.520589 kubelet[2508]: E0213 19:18:01.520556 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:02.030452 kubelet[2508]: E0213 19:18:02.029842 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:03.129994 kubelet[2508]: E0213 19:18:03.129790 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:03.149912 kubelet[2508]: I0213 19:18:03.149853 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbxj5" podStartSLOduration=3.149833645 podStartE2EDuration="3.149833645s" podCreationTimestamp="2025-02-13 19:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:18:02.040566947 +0000 UTC m=+7.117621387" watchObservedRunningTime="2025-02-13 19:18:03.149833645 +0000 UTC m=+8.226888085" Feb 13 19:18:04.036743 kubelet[2508]: E0213 19:18:04.036708 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:05.887879 kubelet[2508]: E0213 19:18:05.887830 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:07.625307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551507432.mount: Deactivated successfully. 
Feb 13 19:18:08.555828 kubelet[2508]: E0213 19:18:08.555778 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:09.214176 containerd[1457]: time="2025-02-13T19:18:09.213771645Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:09.217136 containerd[1457]: time="2025-02-13T19:18:09.216298694Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:18:09.218947 containerd[1457]: time="2025-02-13T19:18:09.218605861Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:09.220056 containerd[1457]: time="2025-02-13T19:18:09.220019215Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.854886208s" Feb 13 19:18:09.220056 containerd[1457]: time="2025-02-13T19:18:09.220053221Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:18:09.225298 containerd[1457]: time="2025-02-13T19:18:09.224540210Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:18:09.225562 containerd[1457]: time="2025-02-13T19:18:09.225518720Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:18:09.242231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482960646.mount: Deactivated successfully. Feb 13 19:18:09.246246 containerd[1457]: time="2025-02-13T19:18:09.246150635Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\"" Feb 13 19:18:09.246785 containerd[1457]: time="2025-02-13T19:18:09.246750271Z" level=info msg="StartContainer for \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\"" Feb 13 19:18:09.275583 systemd[1]: Started cri-containerd-f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4.scope - libcontainer container f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4. Feb 13 19:18:09.350171 containerd[1457]: time="2025-02-13T19:18:09.350084562Z" level=info msg="StartContainer for \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\" returns successfully" Feb 13 19:18:09.368403 systemd[1]: cri-containerd-f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4.scope: Deactivated successfully. 
Feb 13 19:18:09.487589 containerd[1457]: time="2025-02-13T19:18:09.484902510Z" level=info msg="shim disconnected" id=f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4 namespace=k8s.io Feb 13 19:18:09.487768 containerd[1457]: time="2025-02-13T19:18:09.487745980Z" level=warning msg="cleaning up after shim disconnected" id=f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4 namespace=k8s.io Feb 13 19:18:09.487821 containerd[1457]: time="2025-02-13T19:18:09.487809433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:10.063769 kubelet[2508]: E0213 19:18:10.063591 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:10.066445 containerd[1457]: time="2025-02-13T19:18:10.066403080Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:18:10.094012 containerd[1457]: time="2025-02-13T19:18:10.093884835Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\"" Feb 13 19:18:10.094596 containerd[1457]: time="2025-02-13T19:18:10.094563123Z" level=info msg="StartContainer for \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\"" Feb 13 19:18:10.124579 systemd[1]: Started cri-containerd-7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f.scope - libcontainer container 7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f. Feb 13 19:18:10.151065 containerd[1457]: time="2025-02-13T19:18:10.150999630Z" level=info msg="StartContainer for \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\" returns successfully" Feb 13 19:18:10.165803 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:18:10.166656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:10.166800 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:10.175708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:10.175903 systemd[1]: cri-containerd-7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f.scope: Deactivated successfully. Feb 13 19:18:10.192612 containerd[1457]: time="2025-02-13T19:18:10.192550105Z" level=info msg="shim disconnected" id=7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f namespace=k8s.io Feb 13 19:18:10.192612 containerd[1457]: time="2025-02-13T19:18:10.192613437Z" level=warning msg="cleaning up after shim disconnected" id=7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f namespace=k8s.io Feb 13 19:18:10.193043 containerd[1457]: time="2025-02-13T19:18:10.192625159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:10.219033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:10.241112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4-rootfs.mount: Deactivated successfully. 
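mount-cgroup and apply-sysctl-overwrites are the first of Cilium's short-lived init containers: each starts, does its work, and exits, after which the "shim disconnected" and "cleaning up dead shim" lines are routine post-exit bookkeeping rather than failures. The surrounding systemd-sysctl.service stop/start shows Flatcar re-running its own kernel-variable pass around the init container's changes. As an illustration only (the specific sysctls Cilium writes are not visible in this log), an init step of this kind boils down to writes under /proc/sys:

    // sysctl_overwrite.go - illustrative stand-in for an
    // "apply-sysctl-overwrites" style init step: write kernel
    // parameters via /proc/sys. The key/value below is an example;
    // the actual sysctls Cilium applies are not shown in this log.
    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func setSysctl(key, value string) error {
        path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        // Example only: enable IPv4 forwarding, a common CNI prerequisite.
        if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
            log.Fatal(err)
        }
    }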
Feb 13 19:18:11.067337 kubelet[2508]: E0213 19:18:11.067219 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:11.069261 containerd[1457]: time="2025-02-13T19:18:11.069205565Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:18:11.115929 containerd[1457]: time="2025-02-13T19:18:11.115784710Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\"" Feb 13 19:18:11.117157 containerd[1457]: time="2025-02-13T19:18:11.116492879Z" level=info msg="StartContainer for \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\"" Feb 13 19:18:11.117905 update_engine[1437]: I20250213 19:18:11.117434 1437 update_attempter.cc:509] Updating boot flags... Feb 13 19:18:11.150393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3048) Feb 13 19:18:11.180716 systemd[1]: Started cri-containerd-22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8.scope - libcontainer container 22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8. Feb 13 19:18:11.190460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3048) Feb 13 19:18:11.242853 containerd[1457]: time="2025-02-13T19:18:11.242808876Z" level=info msg="StartContainer for \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\" returns successfully" Feb 13 19:18:11.243793 systemd[1]: cri-containerd-22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8.scope: Deactivated successfully. Feb 13 19:18:11.260527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8-rootfs.mount: Deactivated successfully. 
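The mount-bpf-fs init container ensures the BPF filesystem is mounted at /sys/fs/bpf, the equivalent of `mount -t bpf bpffs /sys/fs/bpf`. The interleaved update_engine line is Flatcar's A/B updater adjusting boot flags, and the BTRFS "duplicate device" warnings are udev re-scanning /dev/vda3 during the resulting device churn; the kernel is noting an already-registered device, which looks benign here. A sketch of the idempotent mount, assuming CAP_SYS_ADMIN:

    // mount_bpffs.go - sketch of the mount performed by the
    // "mount-bpf-fs" init container; skips it if one is already present.
    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        var fs unix.Statfs_t
        if err := unix.Statfs("/sys/fs/bpf", &fs); err == nil &&
            fs.Type == unix.BPF_FS_MAGIC {
            return // already mounted
        }
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            log.Fatal(err)
        }
    }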
Feb 13 19:18:11.270358 containerd[1457]: time="2025-02-13T19:18:11.270229059Z" level=info msg="shim disconnected" id=22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8 namespace=k8s.io Feb 13 19:18:11.270358 containerd[1457]: time="2025-02-13T19:18:11.270282309Z" level=warning msg="cleaning up after shim disconnected" id=22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8 namespace=k8s.io Feb 13 19:18:11.270358 containerd[1457]: time="2025-02-13T19:18:11.270291070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:12.070190 kubelet[2508]: E0213 19:18:12.070163 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:12.071912 containerd[1457]: time="2025-02-13T19:18:12.071880912Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:18:12.094030 containerd[1457]: time="2025-02-13T19:18:12.093967920Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\"" Feb 13 19:18:12.094794 containerd[1457]: time="2025-02-13T19:18:12.094762860Z" level=info msg="StartContainer for \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\"" Feb 13 19:18:12.120605 systemd[1]: Started cri-containerd-be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401.scope - libcontainer container be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401. Feb 13 19:18:12.140734 systemd[1]: cri-containerd-be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401.scope: Deactivated successfully. Feb 13 19:18:12.144149 containerd[1457]: time="2025-02-13T19:18:12.144090585Z" level=info msg="StartContainer for \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\" returns successfully" Feb 13 19:18:12.168088 containerd[1457]: time="2025-02-13T19:18:12.168007796Z" level=info msg="shim disconnected" id=be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401 namespace=k8s.io Feb 13 19:18:12.168088 containerd[1457]: time="2025-02-13T19:18:12.168083849Z" level=warning msg="cleaning up after shim disconnected" id=be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401 namespace=k8s.io Feb 13 19:18:12.168088 containerd[1457]: time="2025-02-13T19:18:12.168094491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:12.260718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:13.076408 kubelet[2508]: E0213 19:18:13.074275 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:13.079975 containerd[1457]: time="2025-02-13T19:18:13.079941040Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:18:13.104486 containerd[1457]: time="2025-02-13T19:18:13.104445180Z" level=info msg="CreateContainer within sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\"" Feb 13 19:18:13.104965 containerd[1457]: time="2025-02-13T19:18:13.104934303Z" level=info msg="StartContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\"" Feb 13 19:18:13.136610 systemd[1]: Started cri-containerd-f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1.scope - libcontainer container f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1. Feb 13 19:18:13.168586 containerd[1457]: time="2025-02-13T19:18:13.168545793Z" level=info msg="StartContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" returns successfully" Feb 13 19:18:13.336265 kubelet[2508]: I0213 19:18:13.335958 2508 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:18:13.374894 systemd[1]: Created slice kubepods-burstable-pod2a5969e3_5a11_4248_b841_6da27bd664ca.slice - libcontainer container kubepods-burstable-pod2a5969e3_5a11_4248_b841_6da27bd664ca.slice. Feb 13 19:18:13.383632 systemd[1]: Created slice kubepods-burstable-pod29327406_3512_4f7c_8b7f_ebed2818e4a6.slice - libcontainer container kubepods-burstable-pod29327406_3512_4f7c_8b7f_ebed2818e4a6.slice. 
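With cilium-agent running, CNI comes up, kubelet reports the node Ready ("Fast updating node status as it just became ready"), and the two pending coredns pods are admitted; systemd then creates one slice per pod under the burstable QoS tier. The slice names encode the pod UID with dashes escaped to underscores, as in this sketch of the convention (kubelet's real logic lives in its container-manager code):

    // pod_slice.go - sketch of how kubelet's systemd cgroup driver
    // derives slice unit names like the one started above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "2a5969e3-5a11-4248-b841-6da27bd664ca"))
        // kubepods-burstable-pod2a5969e3_5a11_4248_b841_6da27bd664ca.slice
    }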
Feb 13 19:18:13.436497 kubelet[2508]: I0213 19:18:13.436405 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a5969e3-5a11-4248-b841-6da27bd664ca-config-volume\") pod \"coredns-6f6b679f8f-8lhvg\" (UID: \"2a5969e3-5a11-4248-b841-6da27bd664ca\") " pod="kube-system/coredns-6f6b679f8f-8lhvg" Feb 13 19:18:13.436497 kubelet[2508]: I0213 19:18:13.436450 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29327406-3512-4f7c-8b7f-ebed2818e4a6-config-volume\") pod \"coredns-6f6b679f8f-vbdjc\" (UID: \"29327406-3512-4f7c-8b7f-ebed2818e4a6\") " pod="kube-system/coredns-6f6b679f8f-vbdjc" Feb 13 19:18:13.436497 kubelet[2508]: I0213 19:18:13.436471 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjn9f\" (UniqueName: \"kubernetes.io/projected/29327406-3512-4f7c-8b7f-ebed2818e4a6-kube-api-access-tjn9f\") pod \"coredns-6f6b679f8f-vbdjc\" (UID: \"29327406-3512-4f7c-8b7f-ebed2818e4a6\") " pod="kube-system/coredns-6f6b679f8f-vbdjc" Feb 13 19:18:13.436497 kubelet[2508]: I0213 19:18:13.436494 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdg9d\" (UniqueName: \"kubernetes.io/projected/2a5969e3-5a11-4248-b841-6da27bd664ca-kube-api-access-pdg9d\") pod \"coredns-6f6b679f8f-8lhvg\" (UID: \"2a5969e3-5a11-4248-b841-6da27bd664ca\") " pod="kube-system/coredns-6f6b679f8f-8lhvg" Feb 13 19:18:13.680741 kubelet[2508]: E0213 19:18:13.680540 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:13.682924 containerd[1457]: time="2025-02-13T19:18:13.682835030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lhvg,Uid:2a5969e3-5a11-4248-b841-6da27bd664ca,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:13.685895 kubelet[2508]: E0213 19:18:13.685867 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:13.686352 containerd[1457]: time="2025-02-13T19:18:13.686302982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vbdjc,Uid:29327406-3512-4f7c-8b7f-ebed2818e4a6,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:14.078637 kubelet[2508]: E0213 19:18:14.078589 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:14.100289 kubelet[2508]: I0213 19:18:14.100223 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fjt5s" podStartSLOduration=6.24124106 podStartE2EDuration="14.10020581s" podCreationTimestamp="2025-02-13 19:18:00 +0000 UTC" firstStartedPulling="2025-02-13 19:18:01.364553662 +0000 UTC m=+6.441608062" lastFinishedPulling="2025-02-13 19:18:09.223518372 +0000 UTC m=+14.300572812" observedRunningTime="2025-02-13 19:18:14.099509215 +0000 UTC m=+19.176563695" watchObservedRunningTime="2025-02-13 19:18:14.10020581 +0000 UTC m=+19.177260250" Feb 13 19:18:14.168012 containerd[1457]: time="2025-02-13T19:18:14.167960406Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:14.168936 containerd[1457]: time="2025-02-13T19:18:14.168340828Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:18:14.169218 containerd[1457]: time="2025-02-13T19:18:14.169171526Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:14.170615 containerd[1457]: time="2025-02-13T19:18:14.170578198Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.946005782s" Feb 13 19:18:14.170663 containerd[1457]: time="2025-02-13T19:18:14.170617205Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:18:14.172597 containerd[1457]: time="2025-02-13T19:18:14.172547964Z" level=info msg="CreateContainer within sandbox \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:18:14.182805 containerd[1457]: time="2025-02-13T19:18:14.182754890Z" level=info msg="CreateContainer within sandbox \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\"" Feb 13 19:18:14.184376 containerd[1457]: time="2025-02-13T19:18:14.183474889Z" level=info msg="StartContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\"" Feb 13 19:18:14.226610 systemd[1]: Started cri-containerd-dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f.scope - libcontainer container dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f. 
Feb 13 19:18:14.291608 containerd[1457]: time="2025-02-13T19:18:14.291487296Z" level=info msg="StartContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" returns successfully" Feb 13 19:18:15.081296 kubelet[2508]: E0213 19:18:15.081253 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:15.081723 kubelet[2508]: E0213 19:18:15.081320 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:16.083116 kubelet[2508]: E0213 19:18:16.082877 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:16.083116 kubelet[2508]: E0213 19:18:16.083051 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:18.232556 systemd-networkd[1392]: cilium_host: Link UP Feb 13 19:18:18.234136 systemd-networkd[1392]: cilium_net: Link UP Feb 13 19:18:18.234146 systemd-networkd[1392]: cilium_net: Gained carrier Feb 13 19:18:18.234388 systemd-networkd[1392]: cilium_host: Gained carrier Feb 13 19:18:18.325517 systemd-networkd[1392]: cilium_vxlan: Link UP Feb 13 19:18:18.325525 systemd-networkd[1392]: cilium_vxlan: Gained carrier Feb 13 19:18:18.637417 kernel: NET: Registered PF_ALG protocol family Feb 13 19:18:19.076725 systemd-networkd[1392]: cilium_net: Gained IPv6LL Feb 13 19:18:19.076984 systemd-networkd[1392]: cilium_host: Gained IPv6LL Feb 13 19:18:19.259967 systemd-networkd[1392]: lxc_health: Link UP Feb 13 19:18:19.260517 systemd-networkd[1392]: lxc_health: Gained carrier Feb 13 19:18:19.304291 kubelet[2508]: E0213 19:18:19.304245 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:19.328446 kubelet[2508]: I0213 19:18:19.327959 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r8288" podStartSLOduration=5.6781530799999995 podStartE2EDuration="18.327941591s" podCreationTimestamp="2025-02-13 19:18:01 +0000 UTC" firstStartedPulling="2025-02-13 19:18:01.521544452 +0000 UTC m=+6.598598852" lastFinishedPulling="2025-02-13 19:18:14.171332923 +0000 UTC m=+19.248387363" observedRunningTime="2025-02-13 19:18:15.094638995 +0000 UTC m=+20.171693435" watchObservedRunningTime="2025-02-13 19:18:19.327941591 +0000 UTC m=+24.404996031" Feb 13 19:18:19.866964 systemd-networkd[1392]: lxc452d4dcb1734: Link UP Feb 13 19:18:19.878475 systemd-networkd[1392]: lxc06c98e9db4a7: Link UP Feb 13 19:18:19.892488 kernel: eth0: renamed from tmpfca38 Feb 13 19:18:19.905418 kernel: eth0: renamed from tmp928d6 Feb 13 19:18:19.912320 systemd-networkd[1392]: lxc06c98e9db4a7: Gained carrier Feb 13 19:18:19.916525 systemd-networkd[1392]: lxc452d4dcb1734: Gained carrier Feb 13 19:18:20.355580 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Feb 13 19:18:21.123531 systemd-networkd[1392]: lxc_health: Gained IPv6LL Feb 13 19:18:21.187535 systemd-networkd[1392]: lxc452d4dcb1734: Gained IPv6LL Feb 13 19:18:21.394588 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:55100.service - OpenSSH 
per-connection server daemon (10.0.0.1:55100). Feb 13 19:18:21.437663 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 55100 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:21.439301 sshd-session[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:21.443392 systemd-logind[1435]: New session 8 of user core. Feb 13 19:18:21.455539 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:18:21.508586 systemd-networkd[1392]: lxc06c98e9db4a7: Gained IPv6LL Feb 13 19:18:21.585822 sshd[3745]: Connection closed by 10.0.0.1 port 55100 Feb 13 19:18:21.586156 sshd-session[3743]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:21.588808 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:55100.service: Deactivated successfully. Feb 13 19:18:21.590408 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:18:21.591917 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:18:21.592949 systemd-logind[1435]: Removed session 8. Feb 13 19:18:23.571459 containerd[1457]: time="2025-02-13T19:18:23.570860973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:23.571459 containerd[1457]: time="2025-02-13T19:18:23.571277625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:23.571459 containerd[1457]: time="2025-02-13T19:18:23.571307149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:23.571890 containerd[1457]: time="2025-02-13T19:18:23.571415122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:23.573170 containerd[1457]: time="2025-02-13T19:18:23.573051045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:23.573170 containerd[1457]: time="2025-02-13T19:18:23.573125454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:23.573170 containerd[1457]: time="2025-02-13T19:18:23.573137496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:23.573346 containerd[1457]: time="2025-02-13T19:18:23.573232348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:23.599630 systemd[1]: Started cri-containerd-928d6a16bc917dfbce9c62f8eb06cd4f8921aa785f4eb4c88a96a7356f9672b3.scope - libcontainer container 928d6a16bc917dfbce9c62f8eb06cd4f8921aa785f4eb4c88a96a7356f9672b3. Feb 13 19:18:23.600943 systemd[1]: Started cri-containerd-fca38e66bb75b5c79c02a1470a1c07df95c8078cc018caae2467d664e75a1b0c.scope - libcontainer container fca38e66bb75b5c79c02a1470a1c07df95c8078cc018caae2467d664e75a1b0c. 
Feb 13 19:18:23.615001 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:23.616143 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:23.641057 containerd[1457]: time="2025-02-13T19:18:23.641018245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lhvg,Uid:2a5969e3-5a11-4248-b841-6da27bd664ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"928d6a16bc917dfbce9c62f8eb06cd4f8921aa785f4eb4c88a96a7356f9672b3\"" Feb 13 19:18:23.641910 kubelet[2508]: E0213 19:18:23.641888 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:23.644835 containerd[1457]: time="2025-02-13T19:18:23.644701222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vbdjc,Uid:29327406-3512-4f7c-8b7f-ebed2818e4a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca38e66bb75b5c79c02a1470a1c07df95c8078cc018caae2467d664e75a1b0c\"" Feb 13 19:18:23.645013 containerd[1457]: time="2025-02-13T19:18:23.644984617Z" level=info msg="CreateContainer within sandbox \"928d6a16bc917dfbce9c62f8eb06cd4f8921aa785f4eb4c88a96a7356f9672b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:18:23.645587 kubelet[2508]: E0213 19:18:23.645564 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:23.647555 containerd[1457]: time="2025-02-13T19:18:23.647523332Z" level=info msg="CreateContainer within sandbox \"fca38e66bb75b5c79c02a1470a1c07df95c8078cc018caae2467d664e75a1b0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:18:23.661411 containerd[1457]: time="2025-02-13T19:18:23.661281001Z" level=info msg="CreateContainer within sandbox \"928d6a16bc917dfbce9c62f8eb06cd4f8921aa785f4eb4c88a96a7356f9672b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eb0d05bfed8ab4a08081ec936707ce9035ceac37ab8a87adb50c4145ea427a8\"" Feb 13 19:18:23.662220 containerd[1457]: time="2025-02-13T19:18:23.662192954Z" level=info msg="StartContainer for \"8eb0d05bfed8ab4a08081ec936707ce9035ceac37ab8a87adb50c4145ea427a8\"" Feb 13 19:18:23.663272 containerd[1457]: time="2025-02-13T19:18:23.663239284Z" level=info msg="CreateContainer within sandbox \"fca38e66bb75b5c79c02a1470a1c07df95c8078cc018caae2467d664e75a1b0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70010ee6768220af1343aeacc0cabfd51e9c760d2a308839686664a853d6f08c\"" Feb 13 19:18:23.665145 containerd[1457]: time="2025-02-13T19:18:23.664901130Z" level=info msg="StartContainer for \"70010ee6768220af1343aeacc0cabfd51e9c760d2a308839686664a853d6f08c\"" Feb 13 19:18:23.694580 systemd[1]: Started cri-containerd-70010ee6768220af1343aeacc0cabfd51e9c760d2a308839686664a853d6f08c.scope - libcontainer container 70010ee6768220af1343aeacc0cabfd51e9c760d2a308839686664a853d6f08c. Feb 13 19:18:23.696219 systemd[1]: Started cri-containerd-8eb0d05bfed8ab4a08081ec936707ce9035ceac37ab8a87adb50c4145ea427a8.scope - libcontainer container 8eb0d05bfed8ab4a08081ec936707ce9035ceac37ab8a87adb50c4145ea427a8. 
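Each coredns pod goes through the same CRI sequence seen here: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, then StartContainer; the systemd-resolved hostname warnings coincide with the new sandbox network namespaces coming up and look harmless. A bare-bones sketch of those three calls against containerd's CRI endpoint follows; the metadata is copied from the 8lhvg pod above, while the image reference is a placeholder, since the log never names the coredns image:

    // cri_create.go - minimal sketch of the RunPodSandbox /
    // CreateContainer / StartContainer calls logged above.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtime.PodSandboxConfig{
            Metadata: &runtime.PodSandboxMetadata{
                Name:      "coredns-6f6b679f8f-8lhvg",
                Uid:       "2a5969e3-5a11-4248-b841-6da27bd664ca",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }
        ctr, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "coredns"},
                // Assumption: image ref is a placeholder, not taken from the log.
                Image: &runtime.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.x"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        _, _ = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: ctr.ContainerId})
    }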
Feb 13 19:18:23.737852 containerd[1457]: time="2025-02-13T19:18:23.737798062Z" level=info msg="StartContainer for \"70010ee6768220af1343aeacc0cabfd51e9c760d2a308839686664a853d6f08c\" returns successfully" Feb 13 19:18:23.738016 containerd[1457]: time="2025-02-13T19:18:23.737890633Z" level=info msg="StartContainer for \"8eb0d05bfed8ab4a08081ec936707ce9035ceac37ab8a87adb50c4145ea427a8\" returns successfully" Feb 13 19:18:24.104982 kubelet[2508]: E0213 19:18:24.104897 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:24.110997 kubelet[2508]: E0213 19:18:24.110956 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:24.148042 kubelet[2508]: I0213 19:18:24.147706 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8lhvg" podStartSLOduration=23.147690507 podStartE2EDuration="23.147690507s" podCreationTimestamp="2025-02-13 19:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:18:24.14746612 +0000 UTC m=+29.224520560" watchObservedRunningTime="2025-02-13 19:18:24.147690507 +0000 UTC m=+29.224744907" Feb 13 19:18:24.160157 kubelet[2508]: I0213 19:18:24.159832 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vbdjc" podStartSLOduration=23.159814006 podStartE2EDuration="23.159814006s" podCreationTimestamp="2025-02-13 19:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:18:24.159023231 +0000 UTC m=+29.236077671" watchObservedRunningTime="2025-02-13 19:18:24.159814006 +0000 UTC m=+29.236868446" Feb 13 19:18:24.576844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019517964.mount: Deactivated successfully. Feb 13 19:18:25.109249 kubelet[2508]: E0213 19:18:25.109216 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:25.109775 kubelet[2508]: E0213 19:18:25.109698 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:26.112661 kubelet[2508]: E0213 19:18:26.112624 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:26.549781 kubelet[2508]: I0213 19:18:26.549737 2508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:18:26.550679 kubelet[2508]: E0213 19:18:26.550316 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:26.601600 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). 
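In the two pod_startup_latency_tracker lines above, firstStartedPulling and lastFinishedPulling read "0001-01-01 00:00:00 +0000 UTC": that is Go's zero time.Time, left unset because the coredns image was already on disk, which is also why podStartSLOduration exactly equals podStartE2EDuration. A two-line illustration:

    // local_image_check.go - the "0001-01-01" pull timestamps above are
    // Go's zero time.Time, meaning no pull ever happened for that pod.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var firstStartedPulling time.Time // never set: image was local
        fmt.Println(firstStartedPulling.IsZero())                      // true
        fmt.Println(firstStartedPulling.Format("2006-01-02 15:04:05")) // 0001-01-01 00:00:00
    }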
Feb 13 19:18:26.647923 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:26.649664 sshd-session[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:26.653439 systemd-logind[1435]: New session 9 of user core. Feb 13 19:18:26.661580 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:18:26.781711 sshd[3934]: Connection closed by 10.0.0.1 port 59654 Feb 13 19:18:26.781579 sshd-session[3932]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:26.785235 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:59654.service: Deactivated successfully. Feb 13 19:18:26.786918 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:18:26.787586 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:18:26.788409 systemd-logind[1435]: Removed session 9. Feb 13 19:18:27.114855 kubelet[2508]: E0213 19:18:27.114775 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:31.795414 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:59690.service - OpenSSH per-connection server daemon (10.0.0.1:59690). Feb 13 19:18:31.842479 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 59690 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:31.844008 sshd-session[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:31.849194 systemd-logind[1435]: New session 10 of user core. Feb 13 19:18:31.856599 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:18:31.976677 sshd[3953]: Connection closed by 10.0.0.1 port 59690 Feb 13 19:18:31.977341 sshd-session[3951]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:31.981626 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:59690.service: Deactivated successfully. Feb 13 19:18:31.983307 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:18:31.984339 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:18:31.985212 systemd-logind[1435]: Removed session 10. Feb 13 19:18:36.987989 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:45354.service - OpenSSH per-connection server daemon (10.0.0.1:45354). Feb 13 19:18:37.027497 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 45354 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:37.028777 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:37.032345 systemd-logind[1435]: New session 11 of user core. Feb 13 19:18:37.044616 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:18:37.163430 sshd[3968]: Connection closed by 10.0.0.1 port 45354 Feb 13 19:18:37.163766 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:37.167805 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:45354.service: Deactivated successfully. Feb 13 19:18:37.169771 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:18:37.171878 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:18:37.172697 systemd-logind[1435]: Removed session 11. Feb 13 19:18:42.180084 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:45368.service - OpenSSH per-connection server daemon (10.0.0.1:45368). 
Feb 13 19:18:42.226691 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 45368 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:42.228224 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:42.233202 systemd-logind[1435]: New session 12 of user core. Feb 13 19:18:42.241931 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:18:42.365956 sshd[3983]: Connection closed by 10.0.0.1 port 45368 Feb 13 19:18:42.366664 sshd-session[3981]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:42.379950 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:45368.service: Deactivated successfully. Feb 13 19:18:42.381443 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:18:42.383296 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:18:42.391280 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374). Feb 13 19:18:42.392236 systemd-logind[1435]: Removed session 12. Feb 13 19:18:42.427824 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:42.429004 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:42.432961 systemd-logind[1435]: New session 13 of user core. Feb 13 19:18:42.443545 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:18:42.606325 sshd[3998]: Connection closed by 10.0.0.1 port 45374 Feb 13 19:18:42.607020 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:42.620962 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:45374.service: Deactivated successfully. Feb 13 19:18:42.623031 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:18:42.626801 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:18:42.636707 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:49278.service - OpenSSH per-connection server daemon (10.0.0.1:49278). Feb 13 19:18:42.637809 systemd-logind[1435]: Removed session 13. Feb 13 19:18:42.674243 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 49278 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:42.675660 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:42.679922 systemd-logind[1435]: New session 14 of user core. Feb 13 19:18:42.689535 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:18:42.808110 sshd[4010]: Connection closed by 10.0.0.1 port 49278 Feb 13 19:18:42.808211 sshd-session[4008]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:42.812126 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:49278.service: Deactivated successfully. Feb 13 19:18:42.813677 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:18:42.814492 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:18:42.815791 systemd-logind[1435]: Removed session 14. Feb 13 19:18:47.818990 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294). 
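Every login in this stretch follows the same arc: Accepted publickey (the same RSA key fingerprint each time), pam_unix opens a session for core, logind allocates session N backed by a session-N.scope, and on disconnect the scope and the per-connection service are deactivated. Because sshd here is socket-activated per connection, systemd names each instance after a connection counter plus the local and remote endpoints, as in this illustrative formatting sketch:

    // sshd_unit_name.go - sketch of the naming scheme behind units like
    // "sshd@12-10.0.0.110:22-10.0.0.1:45374.service".
    package main

    import "fmt"

    func unitName(n int, localAddr, remoteAddr string) string {
        return fmt.Sprintf("sshd@%d-%s-%s.service", n, localAddr, remoteAddr)
    }

    func main() {
        fmt.Println(unitName(12, "10.0.0.110:22", "10.0.0.1:45374"))
    }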
Feb 13 19:18:47.862295 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:47.863555 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:47.867460 systemd-logind[1435]: New session 15 of user core. Feb 13 19:18:47.885613 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:18:47.998236 sshd[4024]: Connection closed by 10.0.0.1 port 49294 Feb 13 19:18:47.998602 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:48.001969 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:49294.service: Deactivated successfully. Feb 13 19:18:48.003800 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:18:48.005081 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:18:48.006284 systemd-logind[1435]: Removed session 15. Feb 13 19:18:53.012709 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:35390.service - OpenSSH per-connection server daemon (10.0.0.1:35390). Feb 13 19:18:53.060346 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 35390 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:53.061652 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:53.066871 systemd-logind[1435]: New session 16 of user core. Feb 13 19:18:53.072555 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:18:53.202938 sshd[4039]: Connection closed by 10.0.0.1 port 35390 Feb 13 19:18:53.203286 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:53.212919 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:35390.service: Deactivated successfully. Feb 13 19:18:53.214465 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:18:53.215036 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:18:53.231774 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:35396.service - OpenSSH per-connection server daemon (10.0.0.1:35396). Feb 13 19:18:53.232966 systemd-logind[1435]: Removed session 16. Feb 13 19:18:53.269035 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:53.270218 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:53.274366 systemd-logind[1435]: New session 17 of user core. Feb 13 19:18:53.290570 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:18:53.519219 sshd[4054]: Connection closed by 10.0.0.1 port 35396 Feb 13 19:18:53.520001 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:53.534898 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:35396.service: Deactivated successfully. Feb 13 19:18:53.536500 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:18:53.537747 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:18:53.539123 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:35412.service - OpenSSH per-connection server daemon (10.0.0.1:35412). Feb 13 19:18:53.540041 systemd-logind[1435]: Removed session 17. 
Feb 13 19:18:53.582869 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 35412 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:53.584178 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:53.588413 systemd-logind[1435]: New session 18 of user core. Feb 13 19:18:53.602590 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:18:54.892982 sshd[4068]: Connection closed by 10.0.0.1 port 35412 Feb 13 19:18:54.893985 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:54.906343 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:35412.service: Deactivated successfully. Feb 13 19:18:54.909038 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:18:54.910944 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:18:54.919144 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:35418.service - OpenSSH per-connection server daemon (10.0.0.1:35418). Feb 13 19:18:54.920368 systemd-logind[1435]: Removed session 18. Feb 13 19:18:54.962938 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 35418 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:54.965024 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:54.970028 systemd-logind[1435]: New session 19 of user core. Feb 13 19:18:54.977540 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:18:55.242225 sshd[4089]: Connection closed by 10.0.0.1 port 35418 Feb 13 19:18:55.242890 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:55.251029 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:35418.service: Deactivated successfully. Feb 13 19:18:55.254169 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:18:55.255564 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:18:55.264236 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:35428.service - OpenSSH per-connection server daemon (10.0.0.1:35428). Feb 13 19:18:55.265625 systemd-logind[1435]: Removed session 19. Feb 13 19:18:55.301821 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 35428 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:55.303088 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:55.307478 systemd-logind[1435]: New session 20 of user core. Feb 13 19:18:55.313578 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:18:55.432643 sshd[4103]: Connection closed by 10.0.0.1 port 35428 Feb 13 19:18:55.433187 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:55.436644 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:35428.service: Deactivated successfully. Feb 13 19:18:55.439004 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:18:55.439853 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:18:55.440877 systemd-logind[1435]: Removed session 20. Feb 13 19:19:00.443856 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:35442.service - OpenSSH per-connection server daemon (10.0.0.1:35442). 
Feb 13 19:19:00.482691 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 35442 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:00.483997 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:00.487563 systemd-logind[1435]: New session 21 of user core. Feb 13 19:19:00.504261 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:19:00.613641 sshd[4121]: Connection closed by 10.0.0.1 port 35442 Feb 13 19:19:00.614181 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:00.617911 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:35442.service: Deactivated successfully. Feb 13 19:19:00.619561 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:19:00.620988 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:19:00.621974 systemd-logind[1435]: Removed session 21. Feb 13 19:19:05.625491 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:57898.service - OpenSSH per-connection server daemon (10.0.0.1:57898). Feb 13 19:19:05.667577 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 57898 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:05.668974 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:05.673376 systemd-logind[1435]: New session 22 of user core. Feb 13 19:19:05.686589 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:19:05.799963 sshd[4137]: Connection closed by 10.0.0.1 port 57898 Feb 13 19:19:05.801892 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:05.807315 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:57898.service: Deactivated successfully. Feb 13 19:19:05.809493 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:19:05.811087 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:19:05.812051 systemd-logind[1435]: Removed session 22. Feb 13 19:19:06.003742 kubelet[2508]: E0213 19:19:06.003562 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:09.004308 kubelet[2508]: E0213 19:19:09.004244 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:10.814801 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). Feb 13 19:19:10.853595 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:10.854801 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:10.858757 systemd-logind[1435]: New session 23 of user core. Feb 13 19:19:10.866560 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:19:10.977990 sshd[4151]: Connection closed by 10.0.0.1 port 57902 Feb 13 19:19:10.978366 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:10.991934 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:57902.service: Deactivated successfully. Feb 13 19:19:10.993538 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:19:10.994868 systemd-logind[1435]: Session 23 logged out. Waiting for processes to exit. 
Feb 13 19:19:10.999660 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:57904.service - OpenSSH per-connection server daemon (10.0.0.1:57904). Feb 13 19:19:11.000952 systemd-logind[1435]: Removed session 23. Feb 13 19:19:11.035063 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 57904 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:11.036213 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:11.039821 systemd-logind[1435]: New session 24 of user core. Feb 13 19:19:11.049531 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:19:12.003965 kubelet[2508]: E0213 19:19:12.003917 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:13.564667 containerd[1457]: time="2025-02-13T19:19:13.564627476Z" level=info msg="StopContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" with timeout 30 (s)" Feb 13 19:19:13.565579 containerd[1457]: time="2025-02-13T19:19:13.565555099Z" level=info msg="Stop container \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" with signal terminated" Feb 13 19:19:13.576321 systemd[1]: cri-containerd-dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f.scope: Deactivated successfully. Feb 13 19:19:13.597518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f-rootfs.mount: Deactivated successfully. Feb 13 19:19:13.610081 containerd[1457]: time="2025-02-13T19:19:13.610037748Z" level=info msg="StopContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" with timeout 2 (s)" Feb 13 19:19:13.611096 containerd[1457]: time="2025-02-13T19:19:13.610302795Z" level=info msg="Stop container \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" with signal terminated" Feb 13 19:19:13.614103 containerd[1457]: time="2025-02-13T19:19:13.613950608Z" level=info msg="shim disconnected" id=dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f namespace=k8s.io Feb 13 19:19:13.614103 containerd[1457]: time="2025-02-13T19:19:13.614010329Z" level=warning msg="cleaning up after shim disconnected" id=dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f namespace=k8s.io Feb 13 19:19:13.614103 containerd[1457]: time="2025-02-13T19:19:13.614021570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.620423 systemd-networkd[1392]: lxc_health: Link DOWN Feb 13 19:19:13.620428 systemd-networkd[1392]: lxc_health: Lost carrier Feb 13 19:19:13.630109 containerd[1457]: time="2025-02-13T19:19:13.630056497Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:19:13.641981 systemd[1]: cri-containerd-f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1.scope: Deactivated successfully. Feb 13 19:19:13.642811 systemd[1]: cri-containerd-f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1.scope: Consumed 6.721s CPU time. Feb 13 19:19:13.665283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1-rootfs.mount: Deactivated successfully. 
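The teardown that starts here follows CRI stop semantics: StopContainer "with timeout 30 (s)" sends SIGTERM ("signal terminated") and escalates to SIGKILL only if the task outlives the grace period, while cilium-agent gets a shorter 2-second grace. The lxc_health link drop and the cni-config reload error (05-cilium.conf removed) are expected fallout of removing the agent, and "Consumed 6.721s CPU time" is systemd's accounting for the scope. A sketch of the signal-then-escalate pattern with the containerd client, reusing the operator container id from the log:

    // stop_container.go - sketch of SIGTERM-then-SIGKILL stop semantics.
    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func stop(ctx context.Context, task containerd.Task, grace time.Duration) error {
        exitCh, err := task.Wait(ctx)
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitCh: // exited within the grace period
            return nil
        case <-time.After(grace):
            return task.Kill(ctx, syscall.SIGKILL) // escalate
        }
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ctr, err := client.LoadContainer(ctx,
            "dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f")
        if err != nil {
            log.Fatal(err)
        }
        task, err := ctr.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        if err := stop(ctx, task, 30*time.Second); err != nil {
            log.Fatal(err)
        }
    }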
Feb 13 19:19:13.671844 containerd[1457]: time="2025-02-13T19:19:13.671595551Z" level=info msg="StopContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" returns successfully" Feb 13 19:19:13.672932 containerd[1457]: time="2025-02-13T19:19:13.672883704Z" level=info msg="shim disconnected" id=f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1 namespace=k8s.io Feb 13 19:19:13.672932 containerd[1457]: time="2025-02-13T19:19:13.672928105Z" level=warning msg="cleaning up after shim disconnected" id=f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1 namespace=k8s.io Feb 13 19:19:13.672932 containerd[1457]: time="2025-02-13T19:19:13.672937425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.674391 containerd[1457]: time="2025-02-13T19:19:13.674314580Z" level=info msg="StopPodSandbox for \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\"" Feb 13 19:19:13.678470 containerd[1457]: time="2025-02-13T19:19:13.678256480Z" level=info msg="Container to stop \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.681409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7-shm.mount: Deactivated successfully. Feb 13 19:19:13.687272 systemd[1]: cri-containerd-5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7.scope: Deactivated successfully. Feb 13 19:19:13.689189 containerd[1457]: time="2025-02-13T19:19:13.689146277Z" level=info msg="StopContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" returns successfully" Feb 13 19:19:13.689645 containerd[1457]: time="2025-02-13T19:19:13.689605489Z" level=info msg="StopPodSandbox for \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\"" Feb 13 19:19:13.689645 containerd[1457]: time="2025-02-13T19:19:13.689643610Z" level=info msg="Container to stop \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.689749 containerd[1457]: time="2025-02-13T19:19:13.689654490Z" level=info msg="Container to stop \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.689749 containerd[1457]: time="2025-02-13T19:19:13.689663570Z" level=info msg="Container to stop \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.689749 containerd[1457]: time="2025-02-13T19:19:13.689672010Z" level=info msg="Container to stop \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.689749 containerd[1457]: time="2025-02-13T19:19:13.689680691Z" level=info msg="Container to stop \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.693154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5-shm.mount: Deactivated successfully. Feb 13 19:19:13.702750 systemd[1]: cri-containerd-e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5.scope: Deactivated successfully. 
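The repeated "must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages are informational: while stopping a sandbox, containerd checks each remaining container and only signals those that could still be alive, and every cilium init container had long since exited. The guard amounts to a check against the CRI state enum, roughly:

    // stop_validation.go - sketch of the per-container state guard
    // behind the "must be in running or unknown state" lines above.
    package main

    import (
        "fmt"

        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func shouldSignal(s runtime.ContainerState) bool {
        return s == runtime.ContainerState_CONTAINER_RUNNING ||
            s == runtime.ContainerState_CONTAINER_UNKNOWN
    }

    func main() {
        state := runtime.ContainerState_CONTAINER_EXITED // as reported in the log
        if !shouldSignal(state) {
            fmt.Printf("container must be in running or unknown state, current state %q\n",
                state.String())
        }
    }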
Feb 13 19:19:13.715610 containerd[1457]: time="2025-02-13T19:19:13.715535147Z" level=info msg="shim disconnected" id=5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7 namespace=k8s.io Feb 13 19:19:13.715610 containerd[1457]: time="2025-02-13T19:19:13.715597749Z" level=warning msg="cleaning up after shim disconnected" id=5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7 namespace=k8s.io Feb 13 19:19:13.715610 containerd[1457]: time="2025-02-13T19:19:13.715605789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.728359 containerd[1457]: time="2025-02-13T19:19:13.728312151Z" level=info msg="TearDown network for sandbox \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\" successfully" Feb 13 19:19:13.728600 containerd[1457]: time="2025-02-13T19:19:13.728534157Z" level=info msg="StopPodSandbox for \"5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7\" returns successfully" Feb 13 19:19:13.734866 containerd[1457]: time="2025-02-13T19:19:13.734789116Z" level=info msg="shim disconnected" id=e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5 namespace=k8s.io Feb 13 19:19:13.734866 containerd[1457]: time="2025-02-13T19:19:13.734865718Z" level=warning msg="cleaning up after shim disconnected" id=e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5 namespace=k8s.io Feb 13 19:19:13.734866 containerd[1457]: time="2025-02-13T19:19:13.734879758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.746295 containerd[1457]: time="2025-02-13T19:19:13.746254927Z" level=info msg="TearDown network for sandbox \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" successfully" Feb 13 19:19:13.746295 containerd[1457]: time="2025-02-13T19:19:13.746288608Z" level=info msg="StopPodSandbox for \"e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5\" returns successfully" Feb 13 19:19:13.814084 kubelet[2508]: I0213 19:19:13.813842 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hubble-tls\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.814084 kubelet[2508]: I0213 19:19:13.814093 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9pj\" (UniqueName: \"kubernetes.io/projected/25c8a244-0d3a-45b7-bd53-9927e1593130-kube-api-access-kc9pj\") pod \"25c8a244-0d3a-45b7-bd53-9927e1593130\" (UID: \"25c8a244-0d3a-45b7-bd53-9927e1593130\") " Feb 13 19:19:13.814512 kubelet[2508]: I0213 19:19:13.814122 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bffe49f9-8b8e-49b8-9866-39a95ea951b7-clustermesh-secrets\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.814512 kubelet[2508]: I0213 19:19:13.814141 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-xtables-lock\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814156 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-etc-cni-netd\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814855 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-cgroup\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814880 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-bpf-maps\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814898 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-kernel\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814913 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-run\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815467 kubelet[2508]: I0213 19:19:13.814929 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-net\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.814958 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-config-path\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.814975 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25c8a244-0d3a-45b7-bd53-9927e1593130-cilium-config-path\") pod \"25c8a244-0d3a-45b7-bd53-9927e1593130\" (UID: \"25c8a244-0d3a-45b7-bd53-9927e1593130\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.814991 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cni-path\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.815005 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hostproc\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.815021 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4jw\" (UniqueName: 
\"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-kube-api-access-4d4jw\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.815686 kubelet[2508]: I0213 19:19:13.815037 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-lib-modules\") pod \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\" (UID: \"bffe49f9-8b8e-49b8-9866-39a95ea951b7\") " Feb 13 19:19:13.818542 kubelet[2508]: I0213 19:19:13.817727 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818542 kubelet[2508]: I0213 19:19:13.817810 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818542 kubelet[2508]: I0213 19:19:13.817833 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818542 kubelet[2508]: I0213 19:19:13.817845 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818542 kubelet[2508]: I0213 19:19:13.817853 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818726 kubelet[2508]: I0213 19:19:13.817872 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818726 kubelet[2508]: I0213 19:19:13.817880 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818726 kubelet[2508]: I0213 19:19:13.817888 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.818726 kubelet[2508]: I0213 19:19:13.817897 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.819933 kubelet[2508]: I0213 19:19:13.819907 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.829334 kubelet[2508]: I0213 19:19:13.823956 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:19:13.829334 kubelet[2508]: I0213 19:19:13.825777 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8a244-0d3a-45b7-bd53-9927e1593130-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25c8a244-0d3a-45b7-bd53-9927e1593130" (UID: "25c8a244-0d3a-45b7-bd53-9927e1593130"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:19:13.831666 kubelet[2508]: I0213 19:19:13.831592 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:19:13.831816 kubelet[2508]: I0213 19:19:13.831759 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c8a244-0d3a-45b7-bd53-9927e1593130-kube-api-access-kc9pj" (OuterVolumeSpecName: "kube-api-access-kc9pj") pod "25c8a244-0d3a-45b7-bd53-9927e1593130" (UID: "25c8a244-0d3a-45b7-bd53-9927e1593130"). InnerVolumeSpecName "kube-api-access-kc9pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:19:13.831983 kubelet[2508]: I0213 19:19:13.831836 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bffe49f9-8b8e-49b8-9866-39a95ea951b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:19:13.832062 kubelet[2508]: I0213 19:19:13.831936 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-kube-api-access-4d4jw" (OuterVolumeSpecName: "kube-api-access-4d4jw") pod "bffe49f9-8b8e-49b8-9866-39a95ea951b7" (UID: "bffe49f9-8b8e-49b8-9866-39a95ea951b7"). InnerVolumeSpecName "kube-api-access-4d4jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:19:13.916158 kubelet[2508]: I0213 19:19:13.916125 2508 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916400 kubelet[2508]: I0213 19:19:13.916316 2508 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916400 kubelet[2508]: I0213 19:19:13.916334 2508 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916400 kubelet[2508]: I0213 19:19:13.916343 2508 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4d4jw\" (UniqueName: \"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-kube-api-access-4d4jw\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916400 kubelet[2508]: I0213 19:19:13.916352 2508 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bffe49f9-8b8e-49b8-9866-39a95ea951b7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916400 kubelet[2508]: I0213 19:19:13.916360 2508 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kc9pj\" (UniqueName: \"kubernetes.io/projected/25c8a244-0d3a-45b7-bd53-9927e1593130-kube-api-access-kc9pj\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916369 2508 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bffe49f9-8b8e-49b8-9866-39a95ea951b7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916555 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916566 2508 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916573 2508 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916581 2508 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916589 
2508 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916596 2508 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916700 kubelet[2508]: I0213 19:19:13.916605 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bffe49f9-8b8e-49b8-9866-39a95ea951b7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916936 kubelet[2508]: I0213 19:19:13.916904 2508 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bffe49f9-8b8e-49b8-9866-39a95ea951b7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:13.916936 kubelet[2508]: I0213 19:19:13.916920 2508 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25c8a244-0d3a-45b7-bd53-9927e1593130-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:19:14.209497 kubelet[2508]: I0213 19:19:14.209077 2508 scope.go:117] "RemoveContainer" containerID="dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f" Feb 13 19:19:14.211141 containerd[1457]: time="2025-02-13T19:19:14.210670793Z" level=info msg="RemoveContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\"" Feb 13 19:19:14.213783 containerd[1457]: time="2025-02-13T19:19:14.213746228Z" level=info msg="RemoveContainer for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" returns successfully" Feb 13 19:19:14.214087 kubelet[2508]: I0213 19:19:14.214054 2508 scope.go:117] "RemoveContainer" containerID="dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f" Feb 13 19:19:14.214678 containerd[1457]: time="2025-02-13T19:19:14.214291482Z" level=error msg="ContainerStatus for \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\": not found" Feb 13 19:19:14.215256 systemd[1]: Removed slice kubepods-besteffort-pod25c8a244_0d3a_45b7_bd53_9927e1593130.slice - libcontainer container kubepods-besteffort-pod25c8a244_0d3a_45b7_bd53_9927e1593130.slice. Feb 13 19:19:14.219325 systemd[1]: Removed slice kubepods-burstable-podbffe49f9_8b8e_49b8_9866_39a95ea951b7.slice - libcontainer container kubepods-burstable-podbffe49f9_8b8e_49b8_9866_39a95ea951b7.slice. Feb 13 19:19:14.219433 systemd[1]: kubepods-burstable-podbffe49f9_8b8e_49b8_9866_39a95ea951b7.slice: Consumed 6.863s CPU time. 
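The UnmountVolume.TearDown and "Volume detached" bursts above walk the pod's volume tree under /var/lib/kubelet. Only the tmpfs-backed plugin types (projected, secret) need real unmounts, which matches the var-lib-kubelet .mount units deactivated further down; host-path volumes are plain references into the host filesystem. A small sketch that lists what the reconciler is cleaning up, using the pod UID from the log and the kubelet's /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume> layout:

```go
// Sketch: enumerate the per-pod volume directories the reconciler tears down.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	podUID := "bffe49f9-8b8e-49b8-9866-39a95ea951b7" // from the log above
	root := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

	plugins, err := os.ReadDir(root) // e.g. kubernetes.io~projected, kubernetes.io~secret
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range plugins {
		vols, err := os.ReadDir(filepath.Join(root, p.Name()))
		if err != nil {
			continue
		}
		for _, v := range vols {
			// e.g. kubernetes.io~projected/hubble-tls
			fmt.Printf("%s/%s\n", p.Name(), v.Name())
		}
	}
}
```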
Feb 13 19:19:14.232728 kubelet[2508]: E0213 19:19:14.232682 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\": not found" containerID="dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f" Feb 13 19:19:14.233111 kubelet[2508]: I0213 19:19:14.232900 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f"} err="failed to get container status \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd58833afcc586d36aa9c54b7276156bda19e5dee258e9f3665dd68c40deb05f\": not found" Feb 13 19:19:14.233111 kubelet[2508]: I0213 19:19:14.232999 2508 scope.go:117] "RemoveContainer" containerID="f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1" Feb 13 19:19:14.235338 containerd[1457]: time="2025-02-13T19:19:14.235201516Z" level=info msg="RemoveContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\"" Feb 13 19:19:14.237990 containerd[1457]: time="2025-02-13T19:19:14.237912103Z" level=info msg="RemoveContainer for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" returns successfully" Feb 13 19:19:14.238100 kubelet[2508]: I0213 19:19:14.238078 2508 scope.go:117] "RemoveContainer" containerID="be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401" Feb 13 19:19:14.240134 containerd[1457]: time="2025-02-13T19:19:14.240105997Z" level=info msg="RemoveContainer for \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\"" Feb 13 19:19:14.242214 containerd[1457]: time="2025-02-13T19:19:14.242185168Z" level=info msg="RemoveContainer for \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\" returns successfully" Feb 13 19:19:14.242357 kubelet[2508]: I0213 19:19:14.242340 2508 scope.go:117] "RemoveContainer" containerID="22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8" Feb 13 19:19:14.245303 containerd[1457]: time="2025-02-13T19:19:14.245201002Z" level=info msg="RemoveContainer for \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\"" Feb 13 19:19:14.248845 containerd[1457]: time="2025-02-13T19:19:14.248795890Z" level=info msg="RemoveContainer for \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\" returns successfully" Feb 13 19:19:14.249081 kubelet[2508]: I0213 19:19:14.249053 2508 scope.go:117] "RemoveContainer" containerID="7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f" Feb 13 19:19:14.252047 containerd[1457]: time="2025-02-13T19:19:14.251444555Z" level=info msg="RemoveContainer for \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\"" Feb 13 19:19:14.254324 containerd[1457]: time="2025-02-13T19:19:14.254285785Z" level=info msg="RemoveContainer for \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\" returns successfully" Feb 13 19:19:14.254955 kubelet[2508]: I0213 19:19:14.254930 2508 scope.go:117] "RemoveContainer" containerID="f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4" Feb 13 19:19:14.256861 containerd[1457]: time="2025-02-13T19:19:14.256597762Z" level=info msg="RemoveContainer for \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\"" Feb 13 19:19:14.259141 containerd[1457]: 
time="2025-02-13T19:19:14.259104704Z" level=info msg="RemoveContainer for \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\" returns successfully" Feb 13 19:19:14.259457 kubelet[2508]: I0213 19:19:14.259438 2508 scope.go:117] "RemoveContainer" containerID="f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1" Feb 13 19:19:14.259674 containerd[1457]: time="2025-02-13T19:19:14.259640397Z" level=error msg="ContainerStatus for \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\": not found" Feb 13 19:19:14.259786 kubelet[2508]: E0213 19:19:14.259765 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\": not found" containerID="f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1" Feb 13 19:19:14.259830 kubelet[2508]: I0213 19:19:14.259794 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1"} err="failed to get container status \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f540c75e548b42e808b7a3bc296c59d8413b11c7fbf9b658da6c34a3bac898c1\": not found" Feb 13 19:19:14.259830 kubelet[2508]: I0213 19:19:14.259817 2508 scope.go:117] "RemoveContainer" containerID="be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401" Feb 13 19:19:14.259999 containerd[1457]: time="2025-02-13T19:19:14.259966685Z" level=error msg="ContainerStatus for \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\": not found" Feb 13 19:19:14.260162 kubelet[2508]: E0213 19:19:14.260143 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\": not found" containerID="be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401" Feb 13 19:19:14.260199 kubelet[2508]: I0213 19:19:14.260167 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401"} err="failed to get container status \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\": rpc error: code = NotFound desc = an error occurred when try to find container \"be70440becdaec0cce2acd151a8ff69150bd92d8d06c291ba6d086147c855401\": not found" Feb 13 19:19:14.260199 kubelet[2508]: I0213 19:19:14.260191 2508 scope.go:117] "RemoveContainer" containerID="22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8" Feb 13 19:19:14.260351 containerd[1457]: time="2025-02-13T19:19:14.260326654Z" level=error msg="ContainerStatus for \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\": not found" Feb 13 19:19:14.260469 kubelet[2508]: E0213 
19:19:14.260445 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\": not found" containerID="22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8" Feb 13 19:19:14.260512 kubelet[2508]: I0213 19:19:14.260475 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8"} err="failed to get container status \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"22b16549976c82e7cdb9330f8d6fb2a7a4b517adfdb6b928c856db4728eb41c8\": not found" Feb 13 19:19:14.260512 kubelet[2508]: I0213 19:19:14.260511 2508 scope.go:117] "RemoveContainer" containerID="7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f" Feb 13 19:19:14.260694 containerd[1457]: time="2025-02-13T19:19:14.260665982Z" level=error msg="ContainerStatus for \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\": not found" Feb 13 19:19:14.261617 kubelet[2508]: E0213 19:19:14.261583 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\": not found" containerID="7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f" Feb 13 19:19:14.261617 kubelet[2508]: I0213 19:19:14.261613 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f"} err="failed to get container status \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c52d0ec4a28216d2846d994c74473c4172847234d96379fe5fceecf578d3f\": not found" Feb 13 19:19:14.261701 kubelet[2508]: I0213 19:19:14.261634 2508 scope.go:117] "RemoveContainer" containerID="f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4" Feb 13 19:19:14.261813 containerd[1457]: time="2025-02-13T19:19:14.261786570Z" level=error msg="ContainerStatus for \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\": not found" Feb 13 19:19:14.261944 kubelet[2508]: E0213 19:19:14.261909 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\": not found" containerID="f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4" Feb 13 19:19:14.261944 kubelet[2508]: I0213 19:19:14.261939 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4"} err="failed to get container status \"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f489b49e461e3f5919a0b660c4ecbd00c135beb9f5e2370f3805235ed973b7c4\": not found" Feb 13 19:19:14.587737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d099399b5cc60697bd46fc1e7320a1dcecea47e3fa1c0b516811683eb38efe7-rootfs.mount: Deactivated successfully. Feb 13 19:19:14.587844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e26a44292a781b832f44e40198f5914d187a8f87ef496ef8dd084d7042f6faf5-rootfs.mount: Deactivated successfully. Feb 13 19:19:14.587930 systemd[1]: var-lib-kubelet-pods-25c8a244\x2d0d3a\x2d45b7\x2dbd53\x2d9927e1593130-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkc9pj.mount: Deactivated successfully. Feb 13 19:19:14.587988 systemd[1]: var-lib-kubelet-pods-bffe49f9\x2d8b8e\x2d49b8\x2d9866\x2d39a95ea951b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4d4jw.mount: Deactivated successfully. Feb 13 19:19:14.588051 systemd[1]: var-lib-kubelet-pods-bffe49f9\x2d8b8e\x2d49b8\x2d9866\x2d39a95ea951b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:19:14.588104 systemd[1]: var-lib-kubelet-pods-bffe49f9\x2d8b8e\x2d49b8\x2d9866\x2d39a95ea951b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:19:15.006002 kubelet[2508]: I0213 19:19:15.005894 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c8a244-0d3a-45b7-bd53-9927e1593130" path="/var/lib/kubelet/pods/25c8a244-0d3a-45b7-bd53-9927e1593130/volumes" Feb 13 19:19:15.006330 kubelet[2508]: I0213 19:19:15.006306 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" path="/var/lib/kubelet/pods/bffe49f9-8b8e-49b8-9866-39a95ea951b7/volumes" Feb 13 19:19:15.057698 kubelet[2508]: E0213 19:19:15.057649 2508 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:19:15.517904 sshd[4166]: Connection closed by 10.0.0.1 port 57904 Feb 13 19:19:15.518282 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:15.534851 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:57904.service: Deactivated successfully. Feb 13 19:19:15.536754 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:19:15.536988 systemd[1]: session-24.scope: Consumed 1.829s CPU time. Feb 13 19:19:15.538671 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:19:15.547639 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:42336.service - OpenSSH per-connection server daemon (10.0.0.1:42336). Feb 13 19:19:15.549231 systemd-logind[1435]: Removed session 24. Feb 13 19:19:15.582618 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 42336 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:15.583717 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:15.588126 systemd-logind[1435]: New session 25 of user core. Feb 13 19:19:15.594554 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 19:19:16.736986 kubelet[2508]: I0213 19:19:16.736917 2508 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:19:16Z","lastTransitionTime":"2025-02-13T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:19:16.861535 sshd[4325]: Connection closed by 10.0.0.1 port 42336 Feb 13 19:19:16.862278 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:16.871279 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:42336.service: Deactivated successfully. Feb 13 19:19:16.873324 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:19:16.873567 systemd[1]: session-25.scope: Consumed 1.183s CPU time. Feb 13 19:19:16.876453 systemd-logind[1435]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878685 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25c8a244-0d3a-45b7-bd53-9927e1593130" containerName="cilium-operator" Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878716 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="apply-sysctl-overwrites" Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878722 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="cilium-agent" Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878729 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="clean-cilium-state" Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878735 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="mount-cgroup" Feb 13 19:19:16.880063 kubelet[2508]: E0213 19:19:16.878740 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="mount-bpf-fs" Feb 13 19:19:16.880063 kubelet[2508]: I0213 19:19:16.878761 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="bffe49f9-8b8e-49b8-9866-39a95ea951b7" containerName="cilium-agent" Feb 13 19:19:16.880063 kubelet[2508]: I0213 19:19:16.878768 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c8a244-0d3a-45b7-bd53-9927e1593130" containerName="cilium-operator" Feb 13 19:19:16.883274 systemd[1]: Started sshd@25-10.0.0.110:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342). Feb 13 19:19:16.887040 systemd-logind[1435]: Removed session 25. Feb 13 19:19:16.899835 systemd[1]: Created slice kubepods-burstable-pod9f757040_c510_4d51_9017_389dcdb4c95e.slice - libcontainer container kubepods-burstable-pod9f757040_c510_4d51_9017_389dcdb4c95e.slice. Feb 13 19:19:16.934796 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:16.935277 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:16.938764 systemd-logind[1435]: New session 26 of user core. 
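The "Removed slice" and "Created slice" lines show the kubelet's systemd cgroup naming: QoS class, then "pod" plus the pod UID, with the UID's dashes rewritten as underscores so the unit name survives systemd escaping. A small sketch reproducing the names visible in the log:

```go
// Sketch of the kubepods slice naming seen in the journal above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "9f757040-c510-4d51-9017-389dcdb4c95e"))
	// kubepods-burstable-pod9f757040_c510_4d51_9017_389dcdb4c95e.slice
	fmt.Println(podSlice("besteffort", "25c8a244-0d3a-45b7-bd53-9927e1593130"))
	// kubepods-besteffort-pod25c8a244_0d3a_45b7_bd53_9927e1593130.slice
}
```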
Feb 13 19:19:16.942003 kubelet[2508]: I0213 19:19:16.941971 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-host-proc-sys-net\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942080 kubelet[2508]: I0213 19:19:16.942006 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-hostproc\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942080 kubelet[2508]: I0213 19:19:16.942030 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f757040-c510-4d51-9017-389dcdb4c95e-cilium-ipsec-secrets\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942080 kubelet[2508]: I0213 19:19:16.942046 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs5v\" (UniqueName: \"kubernetes.io/projected/9f757040-c510-4d51-9017-389dcdb4c95e-kube-api-access-vvs5v\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942080 kubelet[2508]: I0213 19:19:16.942063 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f757040-c510-4d51-9017-389dcdb4c95e-hubble-tls\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942080 kubelet[2508]: I0213 19:19:16.942078 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-cilium-run\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942095 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f757040-c510-4d51-9017-389dcdb4c95e-cilium-config-path\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942110 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-cni-path\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942123 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-etc-cni-netd\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942138 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-xtables-lock\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942153 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f757040-c510-4d51-9017-389dcdb4c95e-clustermesh-secrets\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942192 kubelet[2508]: I0213 19:19:16.942167 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-host-proc-sys-kernel\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942312 kubelet[2508]: I0213 19:19:16.942183 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-bpf-maps\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942312 kubelet[2508]: I0213 19:19:16.942199 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-cilium-cgroup\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.942312 kubelet[2508]: I0213 19:19:16.942215 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f757040-c510-4d51-9017-389dcdb4c95e-lib-modules\") pod \"cilium-946rw\" (UID: \"9f757040-c510-4d51-9017-389dcdb4c95e\") " pod="kube-system/cilium-946rw" Feb 13 19:19:16.951574 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:19:17.000229 sshd[4338]: Connection closed by 10.0.0.1 port 42342 Feb 13 19:19:17.000750 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:17.008704 systemd[1]: sshd@25-10.0.0.110:22-10.0.0.1:42342.service: Deactivated successfully. Feb 13 19:19:17.010778 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:19:17.013455 systemd-logind[1435]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:19:17.014599 systemd[1]: Started sshd@26-10.0.0.110:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356). Feb 13 19:19:17.017008 systemd-logind[1435]: Removed session 26. Feb 13 19:19:17.071419 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:19:17.071901 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:17.075281 systemd-logind[1435]: New session 27 of user core. Feb 13 19:19:17.085523 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 19:19:17.205161 kubelet[2508]: E0213 19:19:17.205119 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.206534 containerd[1457]: time="2025-02-13T19:19:17.206334012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-946rw,Uid:9f757040-c510-4d51-9017-389dcdb4c95e,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:17.226696 containerd[1457]: time="2025-02-13T19:19:17.226602346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:17.226696 containerd[1457]: time="2025-02-13T19:19:17.226655947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:17.226696 containerd[1457]: time="2025-02-13T19:19:17.226676387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.226959 containerd[1457]: time="2025-02-13T19:19:17.226766949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.253606 systemd[1]: Started cri-containerd-5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79.scope - libcontainer container 5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79. Feb 13 19:19:17.287973 containerd[1457]: time="2025-02-13T19:19:17.287911396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-946rw,Uid:9f757040-c510-4d51-9017-389dcdb4c95e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\"" Feb 13 19:19:17.288694 kubelet[2508]: E0213 19:19:17.288666 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.295691 containerd[1457]: time="2025-02-13T19:19:17.294475943Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:19:17.307702 containerd[1457]: time="2025-02-13T19:19:17.307643438Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e\"" Feb 13 19:19:17.311273 containerd[1457]: time="2025-02-13T19:19:17.309792926Z" level=info msg="StartContainer for \"5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e\"" Feb 13 19:19:17.339580 systemd[1]: Started cri-containerd-5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e.scope - libcontainer container 5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e. Feb 13 19:19:17.369639 containerd[1457]: time="2025-02-13T19:19:17.369507501Z" level=info msg="StartContainer for \"5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e\" returns successfully" Feb 13 19:19:17.380422 systemd[1]: cri-containerd-5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e.scope: Deactivated successfully. 
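The RunPodSandbox / CreateContainer / StartContainer lines above are the creation half of the same CRI lifecycle as the earlier teardown. A sketch of that sequence, with both configs left to the caller since the log only shows the pod metadata:

```go
// Sketch: sandbox first, then create and start the first container in it.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func startPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandbox *runtimeapi.PodSandboxConfig, ctr *runtimeapi.ContainerConfig) (string, error) {

	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandbox})
	if err != nil {
		return "", err
	}
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctr, // e.g. the "mount-cgroup" container from the log
		SandboxConfig: sandbox,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
	return cc.ContainerId, err
}
```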
Feb 13 19:19:17.408782 containerd[1457]: time="2025-02-13T19:19:17.408722058Z" level=info msg="shim disconnected" id=5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e namespace=k8s.io Feb 13 19:19:17.408782 containerd[1457]: time="2025-02-13T19:19:17.408777459Z" level=warning msg="cleaning up after shim disconnected" id=5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e namespace=k8s.io Feb 13 19:19:17.408782 containerd[1457]: time="2025-02-13T19:19:17.408786179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:18.225421 kubelet[2508]: E0213 19:19:18.225356 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:18.233945 containerd[1457]: time="2025-02-13T19:19:18.233896667Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:19:18.250974 containerd[1457]: time="2025-02-13T19:19:18.250913635Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f\"" Feb 13 19:19:18.251689 containerd[1457]: time="2025-02-13T19:19:18.251576250Z" level=info msg="StartContainer for \"f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f\"" Feb 13 19:19:18.298602 systemd[1]: Started cri-containerd-f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f.scope - libcontainer container f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f. Feb 13 19:19:18.328223 containerd[1457]: time="2025-02-13T19:19:18.328039666Z" level=info msg="StartContainer for \"f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f\" returns successfully" Feb 13 19:19:18.347467 systemd[1]: cri-containerd-f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f.scope: Deactivated successfully. Feb 13 19:19:18.373014 containerd[1457]: time="2025-02-13T19:19:18.372941639Z" level=info msg="shim disconnected" id=f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f namespace=k8s.io Feb 13 19:19:18.373014 containerd[1457]: time="2025-02-13T19:19:18.373007800Z" level=warning msg="cleaning up after shim disconnected" id=f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f namespace=k8s.io Feb 13 19:19:18.373014 containerd[1457]: time="2025-02-13T19:19:18.373017000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:19.046788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f36d9605e6794269743ebfb287cd6920cdee79e8cdcfbe462d61686f0ffb0b9f-rootfs.mount: Deactivated successfully. 
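StartContainer returning successfully with the scope deactivated almost immediately is the signature of a short-lived init container (mount-cgroup, then apply-sysctl-overwrites above) running to completion, after which containerd reaps the shim and logs "shim disconnected". One way to observe such an exit directly with the containerd Go client, assuming the k8s.io namespace and a container ID taken from the log:

```go
// Sketch: wait on a CRI-managed task and report its exit status.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "5a7ec6a8f521d591c4c92fa147266ad05fce54ed23f9ef492d41e8d9ab8c050e")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	st := <-statusC
	code, exitedAt, err := st.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("exited %d at %s\n", code, exitedAt)
}
```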
Feb 13 19:19:19.231104 kubelet[2508]: E0213 19:19:19.230658 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:19.233274 containerd[1457]: time="2025-02-13T19:19:19.233233438Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:19:19.252349 containerd[1457]: time="2025-02-13T19:19:19.252299278Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1\"" Feb 13 19:19:19.253064 containerd[1457]: time="2025-02-13T19:19:19.252820009Z" level=info msg="StartContainer for \"946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1\"" Feb 13 19:19:19.287561 systemd[1]: Started cri-containerd-946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1.scope - libcontainer container 946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1. Feb 13 19:19:19.314896 systemd[1]: cri-containerd-946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1.scope: Deactivated successfully. Feb 13 19:19:19.317468 containerd[1457]: time="2025-02-13T19:19:19.317432805Z" level=info msg="StartContainer for \"946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1\" returns successfully" Feb 13 19:19:19.341134 containerd[1457]: time="2025-02-13T19:19:19.340893217Z" level=info msg="shim disconnected" id=946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1 namespace=k8s.io Feb 13 19:19:19.341134 containerd[1457]: time="2025-02-13T19:19:19.340963939Z" level=warning msg="cleaning up after shim disconnected" id=946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1 namespace=k8s.io Feb 13 19:19:19.341134 containerd[1457]: time="2025-02-13T19:19:19.340974299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:20.046871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-946a43ea45c3af127f7d8175a5d5d1ad27a75ec5bc4058f1e6ddf22e8844a5b1-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:20.058671 kubelet[2508]: E0213 19:19:20.058629 2508 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:19:20.236332 kubelet[2508]: E0213 19:19:20.236283 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.241820 containerd[1457]: time="2025-02-13T19:19:20.239783484Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:19:20.256642 containerd[1457]: time="2025-02-13T19:19:20.256220058Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b\"" Feb 13 19:19:20.258298 containerd[1457]: time="2025-02-13T19:19:20.257108076Z" level=info msg="StartContainer for \"c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b\"" Feb 13 19:19:20.286581 systemd[1]: Started cri-containerd-c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b.scope - libcontainer container c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b. Feb 13 19:19:20.306626 systemd[1]: cri-containerd-c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b.scope: Deactivated successfully. Feb 13 19:19:20.309401 containerd[1457]: time="2025-02-13T19:19:20.309336738Z" level=info msg="StartContainer for \"c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b\" returns successfully" Feb 13 19:19:20.329317 containerd[1457]: time="2025-02-13T19:19:20.329255423Z" level=info msg="shim disconnected" id=c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b namespace=k8s.io Feb 13 19:19:20.329317 containerd[1457]: time="2025-02-13T19:19:20.329314224Z" level=warning msg="cleaning up after shim disconnected" id=c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b namespace=k8s.io Feb 13 19:19:20.329317 containerd[1457]: time="2025-02-13T19:19:20.329323224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:21.046954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c15bd88fdf1519bcecce2512d67b7ec9aee92234285427eb294c6a3e7c76482b-rootfs.mount: Deactivated successfully. 
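The recurring "Container runtime network not ready ... cni plugin not initialized" errors are the kubelet relaying the runtime's NetworkReady condition, which stays false until the new Cilium agent writes its CNI configuration. A sketch of querying that same condition over CRI:

```go
// Sketch: read the runtime's NetworkReady condition, as the kubelet does.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func networkReady(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (bool, string, error) {
	resp, err := rt.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		return false, "", err
	}
	for _, cond := range resp.Status.Conditions {
		if cond.Type == "NetworkReady" {
			// Message carries the reason seen in the log, e.g.
			// "Network plugin returns error: cni plugin not initialized".
			return cond.Status, cond.Message, nil
		}
	}
	return false, "condition not reported", nil
}
```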
Feb 13 19:19:21.241897 kubelet[2508]: E0213 19:19:21.241839 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:21.245409 containerd[1457]: time="2025-02-13T19:19:21.245073966Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:19:21.266416 containerd[1457]: time="2025-02-13T19:19:21.265702373Z" level=info msg="CreateContainer within sandbox \"5429054ef47bdd33ab498d3b79f01db90a8047fdd69de23a438624a7f0dd6a79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a15de2940e6a7ba7a286a437885d35271c436b1f6e2e7d165ad172f432199fd5\"" Feb 13 19:19:21.267349 containerd[1457]: time="2025-02-13T19:19:21.266758593Z" level=info msg="StartContainer for \"a15de2940e6a7ba7a286a437885d35271c436b1f6e2e7d165ad172f432199fd5\"" Feb 13 19:19:21.298590 systemd[1]: Started cri-containerd-a15de2940e6a7ba7a286a437885d35271c436b1f6e2e7d165ad172f432199fd5.scope - libcontainer container a15de2940e6a7ba7a286a437885d35271c436b1f6e2e7d165ad172f432199fd5. Feb 13 19:19:21.331322 containerd[1457]: time="2025-02-13T19:19:21.331268904Z" level=info msg="StartContainer for \"a15de2940e6a7ba7a286a437885d35271c436b1f6e2e7d165ad172f432199fd5\" returns successfully" Feb 13 19:19:21.605472 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:19:22.247507 kubelet[2508]: E0213 19:19:22.246839 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:23.250610 kubelet[2508]: E0213 19:19:23.250561 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:24.004270 kubelet[2508]: E0213 19:19:24.004176 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:24.514861 systemd-networkd[1392]: lxc_health: Link UP Feb 13 19:19:24.525495 systemd-networkd[1392]: lxc_health: Gained carrier Feb 13 19:19:25.206801 kubelet[2508]: E0213 19:19:25.206758 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:25.241638 kubelet[2508]: I0213 19:19:25.241576 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-946rw" podStartSLOduration=9.241549094 podStartE2EDuration="9.241549094s" podCreationTimestamp="2025-02-13 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:22.263192736 +0000 UTC m=+87.340247176" watchObservedRunningTime="2025-02-13 19:19:25.241549094 +0000 UTC m=+90.318603534" Feb 13 19:19:25.255715 kubelet[2508]: E0213 19:19:25.255045 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:25.763516 systemd-networkd[1392]: lxc_health: Gained IPv6LL Feb 13 19:19:26.256614 kubelet[2508]: E0213 19:19:26.256578 
2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:29.866190 sshd[4352]: Connection closed by 10.0.0.1 port 42356 Feb 13 19:19:29.866686 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:29.869798 systemd[1]: sshd@26-10.0.0.110:22-10.0.0.1:42356.service: Deactivated successfully. Feb 13 19:19:29.871976 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:19:29.872959 systemd-logind[1435]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:19:29.873882 systemd-logind[1435]: Removed session 27.
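The repeated "Nameserver limits exceeded" warnings throughout this section come from the node's resolv.conf listing more nameservers than the three glibc supports (MAXNS), so the kubelet keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch of the same check, assuming the standard resolv.conf location:

```go
// Sketch: flag resolv.conf entries beyond the glibc three-nameserver cap.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	const maxNS = 3 // glibc MAXNS; the kubelet truncates to this and warns
	if len(servers) > maxNS {
		fmt.Printf("dropping %v, keeping %v\n", servers[maxNS:], servers[:maxNS])
	} else {
		fmt.Println("within limit:", servers)
	}
}
```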