Feb 13 15:32:57.940553 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:32:57.940575 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:32:57.940585 kernel: KASLR enabled
Feb 13 15:32:57.940591 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:32:57.940597 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:32:57.940603 kernel: random: crng init done
Feb 13 15:32:57.940611 kernel: secureboot: Secure boot disabled
Feb 13 15:32:57.940617 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:32:57.940623 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:32:57.940631 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:32:57.940637 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940643 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940649 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940655 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940663 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940671 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940678 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940684 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940691 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:57.940697 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:32:57.940704 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:32:57.940710 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:32:57.940717 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:32:57.940723 kernel: Zone ranges:
Feb 13 15:32:57.940730 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:32:57.940738 kernel: DMA32 empty
Feb 13 15:32:57.940745 kernel: Normal empty
Feb 13 15:32:57.940751 kernel: Movable zone start for each node
Feb 13 15:32:57.940758 kernel: Early memory node ranges
Feb 13 15:32:57.940764 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:32:57.940770 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:32:57.940777 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:32:57.940783 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:32:57.940790 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:32:57.940796 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:32:57.940803 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:32:57.940809 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:32:57.940817 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:32:57.940823 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:32:57.940830 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:32:57.940839 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:32:57.940846 kernel: psci: Trusted OS migration not required
Feb 13 15:32:57.940854 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:32:57.940862 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:32:57.940869 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:32:57.940876 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:32:57.940883 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:32:57.940890 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:32:57.940897 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:32:57.940904 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:32:57.940911 kernel: CPU features: detected: Spectre-v4
Feb 13 15:32:57.940918 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:32:57.940925 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:32:57.940933 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:32:57.940940 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:32:57.940947 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:32:57.940954 kernel: alternatives: applying boot alternatives
Feb 13 15:32:57.940962 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:32:57.940969 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:32:57.940977 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:32:57.940984 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:32:57.940991 kernel: Fallback order for Node 0: 0
Feb 13 15:32:57.940998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:32:57.941005 kernel: Policy zone: DMA
Feb 13 15:32:57.941013 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:32:57.941020 kernel: software IO TLB: area num 4.
Feb 13 15:32:57.941027 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:32:57.941034 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Feb 13 15:32:57.941041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:32:57.941048 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:32:57.941056 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:32:57.941063 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:32:57.941070 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:32:57.941077 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:32:57.941084 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:32:57.941091 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:32:57.941099 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:32:57.941105 kernel: GICv3: 256 SPIs implemented
Feb 13 15:32:57.941112 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:32:57.941119 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:32:57.941126 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:32:57.941133 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:32:57.941139 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:32:57.941154 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:32:57.941162 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:32:57.941169 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:32:57.941176 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:32:57.941184 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:32:57.941191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:32:57.941198 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:32:57.941205 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:32:57.941212 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:32:57.941219 kernel: arm-pv: using stolen time PV
Feb 13 15:32:57.941227 kernel: Console: colour dummy device 80x25
Feb 13 15:32:57.941234 kernel: ACPI: Core revision 20230628
Feb 13 15:32:57.941241 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:32:57.941248 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:32:57.941256 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:32:57.941263 kernel: landlock: Up and running.
Feb 13 15:32:57.941270 kernel: SELinux: Initializing.
Feb 13 15:32:57.941277 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:32:57.941285 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:32:57.941292 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:32:57.941299 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:32:57.941307 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:32:57.941314 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:32:57.941322 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:32:57.941329 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:32:57.941336 kernel: Remapping and enabling EFI services.
Feb 13 15:32:57.941343 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:32:57.941351 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:32:57.941358 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:32:57.941365 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:32:57.941372 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:32:57.941380 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:32:57.941387 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:32:57.941414 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:32:57.941423 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:32:57.941434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:32:57.941443 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:32:57.941450 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:32:57.941458 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:32:57.941465 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:32:57.941473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:32:57.941480 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:32:57.941489 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:32:57.941496 kernel: SMP: Total of 4 processors activated.
Feb 13 15:32:57.941504 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:32:57.941511 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:32:57.941519 kernel: CPU features: detected: Common not Private translations
Feb 13 15:32:57.941527 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:32:57.941534 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:32:57.941542 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:32:57.941550 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:32:57.941558 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:32:57.941565 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:32:57.941573 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:32:57.941581 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:32:57.941588 kernel: alternatives: applying system-wide alternatives
Feb 13 15:32:57.941596 kernel: devtmpfs: initialized
Feb 13 15:32:57.941604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:32:57.941611 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:32:57.941620 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:32:57.941627 kernel: SMBIOS 3.0.0 present.
Feb 13 15:32:57.941635 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:32:57.941642 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:32:57.941650 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:32:57.941657 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:32:57.941665 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:32:57.941673 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:32:57.941680 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Feb 13 15:32:57.941689 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:32:57.941696 kernel: cpuidle: using governor menu
Feb 13 15:32:57.941704 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:32:57.941711 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:32:57.941719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:32:57.941727 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:32:57.941736 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:32:57.941743 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:32:57.941751 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:32:57.941759 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:32:57.941767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:32:57.941775 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:32:57.941782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:32:57.941790 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:32:57.941797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:32:57.941805 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:32:57.941812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:32:57.941820 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:32:57.941829 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:32:57.941836 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:32:57.941843 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:32:57.941851 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:32:57.941858 kernel: ACPI: Interpreter enabled
Feb 13 15:32:57.941866 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:32:57.941873 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:32:57.941881 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:32:57.941888 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:32:57.941897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:32:57.942023 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:32:57.942095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:32:57.942171 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:32:57.942236 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:32:57.942299 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:32:57.942309 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:32:57.942319 kernel: PCI host bridge to bus 0000:00
Feb 13 15:32:57.942387 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:32:57.942476 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:32:57.942532 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:32:57.942586 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:32:57.942662 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:32:57.942738 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:32:57.942808 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:32:57.942872 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:32:57.942934 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:32:57.942997 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:32:57.943060 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:32:57.943123 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:32:57.943188 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:32:57.943250 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:32:57.943308 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:32:57.943318 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:32:57.943326 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:32:57.943334 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:32:57.943342 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:32:57.943350 kernel: iommu: Default domain type: Translated
Feb 13 15:32:57.943357 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:32:57.943372 kernel: efivars: Registered efivars operations
Feb 13 15:32:57.943380 kernel: vgaarb: loaded
Feb 13 15:32:57.943394 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:32:57.943403 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:32:57.943422 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:32:57.943431 kernel: pnp: PnP ACPI init
Feb 13 15:32:57.943519 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:32:57.943531 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:32:57.943544 kernel: NET: Registered PF_INET protocol family
Feb 13 15:32:57.943552 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:32:57.943560 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:32:57.943568 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:32:57.943576 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:32:57.943587 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:32:57.943600 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:32:57.943608 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:32:57.943615 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:32:57.943624 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:32:57.943632 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:32:57.943639 kernel: kvm [1]: HYP mode not available
Feb 13 15:32:57.943647 kernel: Initialise system trusted keyrings
Feb 13 15:32:57.943654 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:32:57.943661 kernel: Key type asymmetric registered
Feb 13 15:32:57.943669 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:32:57.943676 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:32:57.943684 kernel: io scheduler mq-deadline registered
Feb 13 15:32:57.943692 kernel: io scheduler kyber registered
Feb 13 15:32:57.943700 kernel: io scheduler bfq registered
Feb 13 15:32:57.943707 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:32:57.943715 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:32:57.943722 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:32:57.943792 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:32:57.943803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:32:57.943811 kernel: thunder_xcv, ver 1.0
Feb 13 15:32:57.943818 kernel: thunder_bgx, ver 1.0
Feb 13 15:32:57.943828 kernel: nicpf, ver 1.0
Feb 13 15:32:57.943836 kernel: nicvf, ver 1.0
Feb 13 15:32:57.943909 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:32:57.943974 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:32:57 UTC (1739460777)
Feb 13 15:32:57.943984 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:32:57.943992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:32:57.944000 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:32:57.944007 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:32:57.944017 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:32:57.944024 kernel: Segment Routing with IPv6
Feb 13 15:32:57.944032 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:32:57.944040 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:32:57.944048 kernel: Key type dns_resolver registered
Feb 13 15:32:57.944055 kernel: registered taskstats version 1
Feb 13 15:32:57.944063 kernel: Loading compiled-in X.509 certificates
Feb 13 15:32:57.944071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:32:57.944078 kernel: Key type .fscrypt registered
Feb 13 15:32:57.944087 kernel: Key type fscrypt-provisioning registered
Feb 13 15:32:57.944095 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:32:57.944103 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:32:57.944111 kernel: ima: No architecture policies found
Feb 13 15:32:57.944118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:32:57.944126 kernel: clk: Disabling unused clocks
Feb 13 15:32:57.944133 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:32:57.944141 kernel: Run /init as init process
Feb 13 15:32:57.944157 kernel: with arguments:
Feb 13 15:32:57.944168 kernel: /init
Feb 13 15:32:57.944175 kernel: with environment:
Feb 13 15:32:57.944183 kernel: HOME=/
Feb 13 15:32:57.944190 kernel: TERM=linux
Feb 13 15:32:57.944197 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:32:57.944207 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:32:57.944217 systemd[1]: Detected virtualization kvm.
Feb 13 15:32:57.944225 systemd[1]: Detected architecture arm64.
Feb 13 15:32:57.944234 systemd[1]: Running in initrd.
Feb 13 15:32:57.944243 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:32:57.944250 systemd[1]: Hostname set to <localhost>.
Feb 13 15:32:57.944259 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:32:57.944267 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:32:57.944276 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:57.944284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:57.944293 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:32:57.944304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:32:57.944313 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:32:57.944321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:32:57.944331 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:32:57.944339 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:32:57.944348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:57.944356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:57.944365 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:32:57.944374 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:32:57.944382 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:32:57.944400 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:32:57.944409 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:32:57.944417 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:32:57.944426 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:32:57.944434 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:32:57.944445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:57.944453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:57.944462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:57.944470 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:32:57.944479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:32:57.944487 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:32:57.944496 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:32:57.944504 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:32:57.944512 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:32:57.944522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:32:57.944531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:57.944539 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:32:57.944547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:57.944556 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:32:57.944565 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:32:57.944593 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:32:57.944613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:57.944624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:32:57.944633 systemd-journald[239]: Journal started
Feb 13 15:32:57.944653 systemd-journald[239]: Runtime Journal (/run/log/journal/587361fca03046178e01fbb7da238edc) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:32:57.931261 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:32:57.947898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:32:57.947914 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:32:57.949006 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:32:57.949994 kernel: Bridge firewalling registered
Feb 13 15:32:57.949923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:57.956530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:57.957985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:32:57.959494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:32:57.961551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:32:57.968664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:57.972415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:57.973406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:57.983576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:57.984572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:57.987333 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:32:58.000450 dracut-cmdline[280]: dracut-dracut-053
Feb 13 15:32:58.003002 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:32:58.009854 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 15:32:58.011698 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:32:58.011734 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:32:58.016375 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 15:32:58.017358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:58.019963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:58.077426 kernel: SCSI subsystem initialized
Feb 13 15:32:58.082408 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:32:58.089419 kernel: iscsi: registered transport (tcp)
Feb 13 15:32:58.102427 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:32:58.102448 kernel: QLogic iSCSI HBA Driver
Feb 13 15:32:58.144403 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:32:58.151566 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:32:58.168591 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:32:58.168653 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:32:58.169554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:32:58.216416 kernel: raid6: neonx8 gen() 15769 MB/s
Feb 13 15:32:58.233406 kernel: raid6: neonx4 gen() 15555 MB/s
Feb 13 15:32:58.250403 kernel: raid6: neonx2 gen() 13201 MB/s
Feb 13 15:32:58.267404 kernel: raid6: neonx1 gen() 10485 MB/s
Feb 13 15:32:58.284405 kernel: raid6: int64x8 gen() 6960 MB/s
Feb 13 15:32:58.301406 kernel: raid6: int64x4 gen() 7283 MB/s
Feb 13 15:32:58.318410 kernel: raid6: int64x2 gen() 6036 MB/s
Feb 13 15:32:58.335404 kernel: raid6: int64x1 gen() 5037 MB/s
Feb 13 15:32:58.335428 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s
Feb 13 15:32:58.352402 kernel: raid6: .... xor() 11920 MB/s, rmw enabled
Feb 13 15:32:58.352421 kernel: raid6: using neon recovery algorithm
Feb 13 15:32:58.357404 kernel: xor: measuring software checksum speed
Feb 13 15:32:58.357425 kernel: 8regs : 19783 MB/sec
Feb 13 15:32:58.358849 kernel: 32regs : 18444 MB/sec
Feb 13 15:32:58.358863 kernel: arm64_neon : 27079 MB/sec
Feb 13 15:32:58.358872 kernel: xor: using function: arm64_neon (27079 MB/sec)
Feb 13 15:32:58.410427 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:32:58.420674 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:32:58.435601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:58.446754 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 15:32:58.449985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:58.453292 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:32:58.467334 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 15:32:58.493410 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:32:58.500555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:32:58.539181 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:58.547600 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:32:58.560884 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:32:58.562210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:32:58.564061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:58.566136 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:32:58.576728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:32:58.587181 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:32:58.592111 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:32:58.592220 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:32:58.592233 kernel: GPT:9289727 != 19775487
Feb 13 15:32:58.592243 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:32:58.592253 kernel: GPT:9289727 != 19775487
Feb 13 15:32:58.592263 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:32:58.592281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:58.588813 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:32:58.590721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:32:58.590810 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:58.597156 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:58.598295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:32:58.598447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:58.600579 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:58.610410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Feb 13 15:32:58.612412 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (518)
Feb 13 15:32:58.616648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:58.625784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:32:58.630438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:58.638193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:32:58.643379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:32:58.646861 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:32:58.647757 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:32:58.661547 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:32:58.663284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:58.668706 disk-uuid[553]: Primary Header is updated.
Feb 13 15:32:58.668706 disk-uuid[553]: Secondary Entries is updated.
Feb 13 15:32:58.668706 disk-uuid[553]: Secondary Header is updated.
Feb 13 15:32:58.672429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:58.683148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:59.682436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:59.682494 disk-uuid[555]: The operation has completed successfully.
Feb 13 15:32:59.706739 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:32:59.706853 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:32:59.722543 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:32:59.725396 sh[575]: Success
Feb 13 15:32:59.741446 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:32:59.777536 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:32:59.787795 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:32:59.789479 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:32:59.799925 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:32:59.799974 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:59.799985 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:32:59.799996 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:32:59.801404 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:32:59.804148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:32:59.805332 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:32:59.816568 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:32:59.818192 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:32:59.825702 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:32:59.825750 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:59.825768 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:32:59.827411 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:32:59.834868 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:32:59.836525 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:32:59.842259 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:32:59.847652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:32:59.915304 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:59.929574 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:59.944510 ignition[664]: Ignition 2.20.0
Feb 13 15:32:59.944520 ignition[664]: Stage: fetch-offline
Feb 13 15:32:59.944556 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:59.944564 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:59.944781 ignition[664]: parsed url from cmdline: ""
Feb 13 15:32:59.944786 ignition[664]: no config URL provided
Feb 13 15:32:59.944791 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:32:59.944798 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:32:59.944825 ignition[664]: op(1): [started] loading QEMU firmware config module
Feb 13 15:32:59.944830 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:32:59.955832 systemd-networkd[768]: lo: Link UP
Feb 13 15:32:59.955843 systemd-networkd[768]: lo: Gained carrier
Feb 13 15:32:59.956690 systemd-networkd[768]: Enumeration completed
Feb 13 15:32:59.957407 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:59.957410 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:59.957819 ignition[664]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:32:59.957865 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:59.957841 ignition[664]: QEMU firmware config was not found. Ignoring...
Feb 13 15:32:59.958546 systemd-networkd[768]: eth0: Link UP
Feb 13 15:32:59.958550 systemd-networkd[768]: eth0: Gained carrier
Feb 13 15:32:59.958558 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:59.961460 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:59.975439 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:33:00.018217 ignition[664]: parsing config with SHA512: 00aeef4e1afb92fff391e84880ceaac5eb6690aa6c336064aa280fd414c9973837bf85bb51448ad3e7de96a65c0baeb410e54874fa291606455b25409480c6e6
Feb 13 15:33:00.024443 unknown[664]: fetched base config from "system"
Feb 13 15:33:00.024462 unknown[664]: fetched user config from "qemu"
Feb 13 15:33:00.025998 ignition[664]: fetch-offline: fetch-offline passed
Feb 13 15:33:00.026098 ignition[664]: Ignition finished successfully
Feb 13 15:33:00.028287 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:33:00.029432 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:33:00.039632 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:33:00.050814 ignition[775]: Ignition 2.20.0
Feb 13 15:33:00.050826 ignition[775]: Stage: kargs
Feb 13 15:33:00.050988 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:33:00.050998 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:33:00.051873 ignition[775]: kargs: kargs passed
Feb 13 15:33:00.051921 ignition[775]: Ignition finished successfully
Feb 13 15:33:00.054128 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:33:00.061576 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:33:00.072323 ignition[784]: Ignition 2.20.0
Feb 13 15:33:00.072333 ignition[784]: Stage: disks
Feb 13 15:33:00.072517 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:33:00.072526 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:33:00.073417 ignition[784]: disks: disks passed
Feb 13 15:33:00.073464 ignition[784]: Ignition finished successfully
Feb 13 15:33:00.078442 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:33:00.079361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:33:00.080618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:33:00.081477 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:33:00.082215 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:33:00.082967 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:33:00.090578 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:33:00.101751 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:33:00.106905 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:33:00.114717 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:33:00.157415 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:33:00.158101 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:33:00.159313 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:33:00.180524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:33:00.182264 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:33:00.183128 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:33:00.183185 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:33:00.183208 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:33:00.190527 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:33:00.196268 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
Feb 13 15:33:00.196299 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:33:00.196310 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:33:00.196328 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:33:00.194076 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:33:00.201646 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:33:00.203445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:33:00.250292 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:33:00.255971 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:33:00.262347 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:33:00.268435 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:33:00.340094 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:33:00.349512 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:33:00.351482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:33:00.358407 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:33:00.384447 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:33:00.388282 ignition[916]: INFO : Ignition 2.20.0
Feb 13 15:33:00.388282 ignition[916]: INFO : Stage: mount
Feb 13 15:33:00.388282 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:33:00.388282 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:33:00.388282 ignition[916]: INFO : mount: mount passed
Feb 13 15:33:00.388282 ignition[916]: INFO : Ignition finished successfully
Feb 13 15:33:00.389859 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:33:00.397510 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:33:00.798657 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:33:00.813598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:33:00.819591 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 15:33:00.819626 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:33:00.821049 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:33:00.821072 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:33:00.823408 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:33:00.824793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:33:00.842692 ignition[946]: INFO : Ignition 2.20.0
Feb 13 15:33:00.842692 ignition[946]: INFO : Stage: files
Feb 13 15:33:00.844195 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:33:00.844195 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:33:00.844195 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:33:00.847296 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:33:00.847296 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:33:00.849433 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:33:00.849433 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:33:00.849433 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:33:00.849433 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:33:00.849433 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:33:00.848142 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 15:33:00.944668 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:33:01.783593 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 15:33:02.028356 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:33:02.030118 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:33:02.030118 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:33:02.337907 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:33:02.408017 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:33:02.409685 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:33:02.653485 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:33:02.929149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:33:02.929149 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:33:02.931998 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:33:02.958043 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:33:02.961429 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:33:02.961429 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:33:02.961429 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:33:02.966094 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:33:02.966094 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:33:02.966094 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:33:02.966094 ignition[946]: INFO : files: files passed
Feb 13 15:33:02.966094 ignition[946]: INFO : Ignition finished successfully
Feb 13 15:33:02.963034 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:33:02.981583 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:33:02.983426 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:33:02.985820 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:33:02.987285 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:33:02.990983 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:33:02.993798 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:33:02.993798 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:33:02.996342 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:33:02.997457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:33:02.998743 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:33:03.007521 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:33:03.025688 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:33:03.025787 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:33:03.027417 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:33:03.028828 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:33:03.030096 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:33:03.030830 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:33:03.044866 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:33:03.050525 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:33:03.057704 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:33:03.058660 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:33:03.060367 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:33:03.061592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:33:03.061708 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:33:03.063815 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:33:03.065213 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:33:03.066449 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:33:03.067756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:33:03.069326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:33:03.071052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:33:03.072433 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:33:03.073863 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:33:03.075384 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:33:03.076662 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:33:03.077800 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:33:03.077915 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:33:03.079656 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:33:03.081058 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:33:03.082514 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:33:03.084070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:33:03.085206 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:33:03.085315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:33:03.087465 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:33:03.087578 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:33:03.089060 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:33:03.090181 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:33:03.094465 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:33:03.096503 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:33:03.097226 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:33:03.098475 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:33:03.098562 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:33:03.099845 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:33:03.099922 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:33:03.101039 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:33:03.101151 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:33:03.102401 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:33:03.102501 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:33:03.120568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:33:03.121224 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:33:03.121343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:33:03.126606 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:33:03.127233 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:33:03.127349 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:33:03.128665 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:33:03.131370 ignition[1001]: INFO : Ignition 2.20.0 Feb 13 15:33:03.131370 ignition[1001]: INFO : Stage: umount Feb 13 15:33:03.128763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 15:33:03.133368 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:03.133368 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:03.133368 ignition[1001]: INFO : umount: umount passed Feb 13 15:33:03.133368 ignition[1001]: INFO : Ignition finished successfully Feb 13 15:33:03.133432 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:33:03.134467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:33:03.136598 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:33:03.136671 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:33:03.138368 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:33:03.139358 systemd[1]: Stopped target network.target - Network. Feb 13 15:33:03.140657 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:33:03.140717 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:33:03.142692 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:33:03.142737 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:33:03.144028 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:33:03.144068 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:33:03.145532 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:33:03.145582 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:33:03.147050 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:33:03.148222 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:33:03.149826 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:33:03.149915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:33:03.151604 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:33:03.151689 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:33:03.159355 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:33:03.159500 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:33:03.160448 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 15:33:03.162294 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:33:03.162388 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:33:03.163731 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:33:03.163777 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:33:03.172510 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:33:03.173166 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:33:03.173220 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:33:03.174823 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:33:03.174870 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:03.176267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:33:03.176308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:33:03.177906 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Feb 13 15:33:03.177947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:33:03.179435 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:33:03.189671 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:33:03.190733 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:33:03.192486 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:33:03.192623 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:33:03.193878 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:33:03.193916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:33:03.196111 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:33:03.196160 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:33:03.197826 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:33:03.197874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:33:03.203954 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:33:03.204028 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:33:03.206582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:33:03.206638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:33:03.218590 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:33:03.219346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:33:03.219417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:33:03.221277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:33:03.221323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:33:03.223498 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:33:03.225415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:33:03.226593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:33:03.228735 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:33:03.237618 systemd[1]: Switching root. Feb 13 15:33:03.261040 systemd-journald[239]: Journal stopped Feb 13 15:33:04.149620 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 15:33:04.149675 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:33:04.149687 kernel: SELinux: policy capability open_perms=1 Feb 13 15:33:04.149697 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:33:04.149709 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:33:04.149718 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:33:04.149727 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:33:04.149740 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:33:04.149754 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:33:04.149764 kernel: audit: type=1403 audit(1739460783.529:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:33:04.149776 systemd[1]: Successfully loaded SELinux policy in 32.982ms. Feb 13 15:33:04.149793 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.501ms. 
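After the switch-root, the first lines from the new journal show the SELinux policy load (audit type=1403, 32.982 ms) and the relabel of /dev, /dev/shm, /run and /sys/fs/cgroup. Once the system is up, the same records can be pulled back out of the journal; two illustrative queries:

    # Kernel audit record for the policy load (the type=1403 line above):
    journalctl -b -t kernel | grep -i selinux
    # PID 1's own policy-load and relabel timing messages:
    journalctl -b _PID=1 | grep -i selinux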
Feb 13 15:33:04.149805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:33:04.149815 systemd[1]: Detected virtualization kvm. Feb 13 15:33:04.149826 systemd[1]: Detected architecture arm64. Feb 13 15:33:04.149838 systemd[1]: Detected first boot. Feb 13 15:33:04.149848 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:33:04.149858 zram_generator::config[1045]: No configuration found. Feb 13 15:33:04.149869 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:33:04.149879 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:33:04.149889 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:33:04.149900 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:33:04.149911 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:33:04.149921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:33:04.149933 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:33:04.149945 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:33:04.149955 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:33:04.149965 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:33:04.149976 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:33:04.149986 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:33:04.149997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:33:04.150007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:33:04.150019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:33:04.150034 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:33:04.150044 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:33:04.150055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:33:04.150065 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:33:04.150075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:33:04.150086 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:33:04.150097 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:33:04.150107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:33:04.150127 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:33:04.150138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:33:04.150149 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:33:04.150159 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:33:04.150169 systemd[1]: Reached target swap.target - Swaps. 
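"Detected virtualization kvm" comes from systemd's virtualization probe, which is also exposed as a standalone tool:

    # Prints the detected hypervisor ("kvm" for this boot); exits 0 when
    # running virtualized, non-zero on bare metal.
    systemd-detect-virt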
Feb 13 15:33:04.150180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:33:04.150191 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:33:04.150201 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:33:04.150213 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:33:04.150224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:33:04.150234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:33:04.150244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:33:04.150254 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:33:04.150265 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:33:04.150275 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:33:04.150285 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:33:04.150296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:33:04.150308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:33:04.150318 systemd[1]: Reached target machines.target - Containers. Feb 13 15:33:04.150328 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:33:04.150339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:33:04.150349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:33:04.150360 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:33:04.150370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:33:04.150380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:33:04.150398 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:33:04.150410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:33:04.150421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:33:04.150432 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:33:04.150442 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:33:04.150453 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:33:04.150463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:33:04.150473 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:33:04.150483 kernel: loop: module loaded Feb 13 15:33:04.150495 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:33:04.150505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:33:04.150515 kernel: ACPI: bus type drm_connector registered Feb 13 15:33:04.150525 kernel: fuse: init (API version 7.39) Feb 13 15:33:04.150534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:33:04.150545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
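The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts above are all instances of a single systemd template unit: one unit file serves every module, with the instance name substituted for the %i specifier. To see the template any instance expands from:

    # Show the template behind the instances above; %i expands to the
    # module name (e.g. "loop" for modprobe@loop.service).
    systemctl cat modprobe@loop.service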
Feb 13 15:33:04.150555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:33:04.150565 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:33:04.150576 systemd[1]: Stopped verity-setup.service. Feb 13 15:33:04.150604 systemd-journald[1114]: Collecting audit messages is disabled. Feb 13 15:33:04.150626 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:33:04.150636 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:33:04.150647 systemd-journald[1114]: Journal started Feb 13 15:33:04.150669 systemd-journald[1114]: Runtime Journal (/run/log/journal/587361fca03046178e01fbb7da238edc) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:33:03.964357 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:33:03.983872 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:33:03.984276 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:33:04.153901 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:33:04.154554 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:33:04.155763 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:33:04.156796 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:33:04.157828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:33:04.160446 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:33:04.161720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:33:04.163040 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:33:04.163193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:33:04.164777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:33:04.164909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:33:04.166763 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:33:04.166915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:33:04.168279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:33:04.168443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:33:04.169670 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:33:04.169812 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:33:04.171198 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:33:04.171328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:33:04.173878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:33:04.175035 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:33:04.176708 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:33:04.188610 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:33:04.197490 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:33:04.200564 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:33:04.201466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 13 15:33:04.201497 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:33:04.203582 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:33:04.205742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:33:04.209305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:33:04.210280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:33:04.212179 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:33:04.215377 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:33:04.216477 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:33:04.217446 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:33:04.221685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:33:04.227447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:04.229072 systemd-journald[1114]: Time spent on flushing to /var/log/journal/587361fca03046178e01fbb7da238edc is 25.464ms for 858 entries. Feb 13 15:33:04.229072 systemd-journald[1114]: System Journal (/var/log/journal/587361fca03046178e01fbb7da238edc) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:33:04.266602 systemd-journald[1114]: Received client request to flush runtime journal. Feb 13 15:33:04.266655 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 15:33:04.231008 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:33:04.236137 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:33:04.238478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:33:04.239895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:33:04.242599 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:33:04.243677 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:33:04.245104 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:33:04.252547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:33:04.262621 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:33:04.265461 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:33:04.276405 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:33:04.273123 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:33:04.277241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:04.283831 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:33:04.291649 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:33:04.297590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
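systemd-journal-flush.service above is what moves the runtime journal from /run/log/journal to the persistent /var/log/journal (the "Received client request to flush runtime journal" line). The same request, and the resulting footprint, can be checked by hand:

    # Ask journald to flush /run/log/journal to /var/log/journal:
    journalctl --flush
    # See how much the persistent journal now occupies (8.0M used of a
    # 195.6M cap, per the sizes logged above):
    journalctl --disk-usage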
Feb 13 15:33:04.299163 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:33:04.299925 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:33:04.303418 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 15:33:04.330123 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 15:33:04.330141 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 15:33:04.334471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:33:04.345452 kernel: loop2: detected capacity change from 0 to 194512 Feb 13 15:33:04.381456 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:33:04.389434 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 15:33:04.398492 kernel: loop5: detected capacity change from 0 to 194512 Feb 13 15:33:04.404576 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:33:04.405030 (sd-merge)[1180]: Merged extensions into '/usr'. Feb 13 15:33:04.410897 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:33:04.410912 systemd[1]: Reloading... Feb 13 15:33:04.482462 zram_generator::config[1206]: No configuration found. Feb 13 15:33:04.566320 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:33:04.603478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:04.639257 systemd[1]: Reloading finished in 227 ms. Feb 13 15:33:04.670897 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:33:04.672351 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:33:04.683671 systemd[1]: Starting ensure-sysext.service... Feb 13 15:33:04.685551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:33:04.694436 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:33:04.694452 systemd[1]: Reloading... Feb 13 15:33:04.705384 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:33:04.705710 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:33:04.706348 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:33:04.706613 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 15:33:04.706676 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 15:33:04.709811 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:33:04.709823 systemd-tmpfiles[1241]: Skipping /boot Feb 13 15:33:04.717017 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:33:04.717035 systemd-tmpfiles[1241]: Skipping /boot Feb 13 15:33:04.742413 zram_generator::config[1265]: No configuration found. 
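The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, the kubernetes image being the one Ignition linked into /etc/extensions earlier. The merge state is inspectable with the same tool:

    # List extension images and whether each is currently merged:
    systemd-sysext status
    # Re-merge after adding or removing an image under /etc/extensions
    # or /var/lib/extensions:
    systemd-sysext refresh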
Feb 13 15:33:04.829873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:04.865835 systemd[1]: Reloading finished in 171 ms. Feb 13 15:33:04.879634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:33:04.892861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:33:04.905926 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:33:04.908569 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:33:04.913934 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:33:04.919912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:33:04.923214 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:33:04.933727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:33:04.937243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:33:04.940462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:33:04.944694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:33:04.951203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:33:04.952404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:33:04.953278 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:33:04.955666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:33:04.956102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:33:04.958135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:33:04.958289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:33:04.959879 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:33:04.960052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:33:04.969040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:33:04.969743 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Feb 13 15:33:04.978827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:33:04.981487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:33:04.984076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:33:04.985002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:33:04.988681 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:33:04.993656 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:33:04.996205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:33:04.998211 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
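The systemd-tmpfiles "Duplicate line for path …, ignoring" warnings above mean the same path is declared by more than one tmpfiles.d fragment, as with /root, /var/log/journal and /var/lib/systemd here. The merged view that produces them can be dumped directly:

    # Print the fully merged tmpfiles.d configuration, annotated with the
    # fragment each line comes from; duplicated paths show up side by side:
    systemd-tmpfiles --cat-config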
Feb 13 15:33:05.000798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:33:05.000953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:33:05.002318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:33:05.002492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:33:05.003798 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:33:05.003916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:33:05.005319 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:33:05.017527 systemd[1]: Finished ensure-sysext.service. Feb 13 15:33:05.018596 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:33:05.027754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:33:05.039673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:33:05.043465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:33:05.045761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:33:05.047967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:33:05.049030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:33:05.052472 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:33:05.053742 augenrules[1371]: No rules Feb 13 15:33:05.058591 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:33:05.060000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:33:05.060578 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:33:05.060757 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:33:05.061799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:33:05.061926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:33:05.065423 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:33:05.094129 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:33:05.094304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:33:05.095631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:33:05.095787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:33:05.096998 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:33:05.097153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:33:05.102678 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1361) Feb 13 15:33:05.110060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:33:05.110182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 15:33:05.132183 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:33:05.159124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:33:05.177326 systemd-resolved[1313]: Positive Trust Anchors: Feb 13 15:33:05.177480 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:33:05.177513 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:33:05.178651 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:33:05.179728 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:33:05.180820 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:33:05.183974 systemd-networkd[1373]: lo: Link UP Feb 13 15:33:05.183978 systemd-networkd[1373]: lo: Gained carrier Feb 13 15:33:05.185539 systemd-networkd[1373]: Enumeration completed Feb 13 15:33:05.185703 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:33:05.188205 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:33:05.190068 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:33:05.190168 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:33:05.191118 systemd-networkd[1373]: eth0: Link UP Feb 13 15:33:05.191195 systemd-networkd[1373]: eth0: Gained carrier Feb 13 15:33:05.191261 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:33:05.193056 systemd-resolved[1313]: Defaulting to hostname 'linux'. Feb 13 15:33:05.194616 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:33:05.195592 systemd[1]: Reached target network.target - Network. Feb 13 15:33:05.196545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:33:05.214025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:33:05.222471 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:33:05.223074 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Feb 13 15:33:05.223742 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:33:05.223798 systemd-timesyncd[1379]: Initial clock synchronization to Thu 2025-02-13 15:33:05.562068 UTC. Feb 13 15:33:05.237682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:33:05.243752 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
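eth0 here is matched by the stock catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and acquires 10.0.0.112/16 over DHCPv4. A catch-all .network of that shape is roughly the following — a sketch, not the verbatim Flatcar file:

    # Illustrative equivalent of the zz-default.network matched above;
    # dropping a file like this into /etc/systemd/network would override it.
    cat > /etc/systemd/network/zz-default.network <<'EOF'
    [Match]
    Name=*
    [Network]
    DHCP=yes
    EOF
    # Inspect the lease and carrier state networkd reported above:
    networkctl status eth0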
Feb 13 15:33:05.246368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:33:05.272425 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:33:05.280967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:33:05.311059 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:33:05.312467 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:33:05.313444 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:33:05.314404 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:33:05.315417 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:33:05.316613 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:33:05.317527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:33:05.318533 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:33:05.319486 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:33:05.319521 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:33:05.320267 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:33:05.322466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:33:05.324895 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:33:05.333529 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:33:05.335873 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:33:05.337276 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:33:05.338565 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:33:05.339505 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:33:05.340277 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:33:05.340311 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:33:05.341351 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:33:05.346463 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:33:05.343345 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:33:05.347768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:33:05.351614 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:33:05.352441 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:33:05.353488 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:33:05.354029 jq[1418]: false Feb 13 15:33:05.357647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:33:05.360102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
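By this point the unit landscape has its steady shape: timers (logrotate, mdadm, tmpfiles-clean), listening sockets (dbus, docker, sshd) and the sysinit/basic targets. The same inventory is queryable at runtime:

    # Active timers with their next elapse times:
    systemctl list-timers
    # Socket units currently listening (docker.socket, sshd.socket, ...):
    systemctl list-sockets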
Feb 13 15:33:05.363570 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:33:05.368628 dbus-daemon[1417]: [system] SELinux support is enabled Feb 13 15:33:05.369567 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:33:05.369920 extend-filesystems[1419]: Found loop3 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found loop4 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found loop5 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda1 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda2 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda3 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found usr Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda4 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda6 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda7 Feb 13 15:33:05.372755 extend-filesystems[1419]: Found vda9 Feb 13 15:33:05.372755 extend-filesystems[1419]: Checking size of /dev/vda9 Feb 13 15:33:05.371497 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:33:05.388546 extend-filesystems[1419]: Resized partition /dev/vda9 Feb 13 15:33:05.372843 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:33:05.373544 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:33:05.377692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:33:05.381682 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:33:05.388464 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:33:05.390918 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:33:05.393453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:33:05.393745 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:33:05.393908 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:33:05.397377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:33:05.397540 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:33:05.406266 jq[1435]: true Feb 13 15:33:05.409497 extend-filesystems[1440]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:33:05.412587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1357) Feb 13 15:33:05.412647 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:33:05.428154 jq[1450]: true Feb 13 15:33:05.428532 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:33:05.436145 tar[1442]: linux-arm64/helm Feb 13 15:33:05.455023 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:33:05.443051 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:33:05.443077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
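extend-filesystems.service grows the root filesystem to fill its partition on first boot; the kernel lines show /dev/vda9 going from 553472 to 1864699 4k blocks online. The manual equivalent of what resize2fs 1.47.1 did here:

    # Grow the mounted ext4 filesystem in place; with no explicit size
    # argument, resize2fs grows it to fill the (already enlarged) partition:
    resize2fs /dev/vda9
    # Confirm the new block count (1864699 after this boot's resize):
    dumpe2fs -h /dev/vda9 | grep 'Block count'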
Feb 13 15:33:05.445493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:33:05.445518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:33:05.455931 extend-filesystems[1440]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:33:05.455931 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:33:05.455931 extend-filesystems[1440]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:33:05.461629 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Feb 13 15:33:05.457199 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:33:05.457387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:33:05.478760 update_engine[1432]: I20250213 15:33:05.477851 1432 main.cc:92] Flatcar Update Engine starting Feb 13 15:33:05.483405 update_engine[1432]: I20250213 15:33:05.480506 1432 update_check_scheduler.cc:74] Next update check in 9m55s Feb 13 15:33:05.482220 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:33:05.489861 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:33:05.504985 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:33:05.510352 systemd-logind[1427]: New seat seat0. Feb 13 15:33:05.512493 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:33:05.520834 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:33:05.522643 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:33:05.529092 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:33:05.571300 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:33:05.658588 containerd[1452]: time="2025-02-13T15:33:05.658026840Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:33:05.707298 containerd[1452]: time="2025-02-13T15:33:05.706831880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.708525 containerd[1452]: time="2025-02-13T15:33:05.708485000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708617480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708643040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708816680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708835160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708894320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.708920400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709099560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709129080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709141560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709150000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709225840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.709737 containerd[1452]: time="2025-02-13T15:33:05.709440160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:33:05.710018 containerd[1452]: time="2025-02-13T15:33:05.709539680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:33:05.710018 containerd[1452]: time="2025-02-13T15:33:05.709552720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:33:05.710018 containerd[1452]: time="2025-02-13T15:33:05.709624560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:33:05.710018 containerd[1452]: time="2025-02-13T15:33:05.709662520Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:33:05.716892 containerd[1452]: time="2025-02-13T15:33:05.716850080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:33:05.717041 containerd[1452]: time="2025-02-13T15:33:05.717021200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:33:05.717094 containerd[1452]: time="2025-02-13T15:33:05.717047720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:33:05.717203 containerd[1452]: time="2025-02-13T15:33:05.717172880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:33:05.717233 containerd[1452]: time="2025-02-13T15:33:05.717205600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 15:33:05.717601 containerd[1452]: time="2025-02-13T15:33:05.717570200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:33:05.718096 containerd[1452]: time="2025-02-13T15:33:05.718066320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:33:05.718516 containerd[1452]: time="2025-02-13T15:33:05.718491880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:33:05.718558 containerd[1452]: time="2025-02-13T15:33:05.718522040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:33:05.718558 containerd[1452]: time="2025-02-13T15:33:05.718537600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:33:05.718593 containerd[1452]: time="2025-02-13T15:33:05.718552640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718593 containerd[1452]: time="2025-02-13T15:33:05.718574840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718593 containerd[1452]: time="2025-02-13T15:33:05.718587480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718641 containerd[1452]: time="2025-02-13T15:33:05.718601880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718641 containerd[1452]: time="2025-02-13T15:33:05.718616800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718795 containerd[1452]: time="2025-02-13T15:33:05.718773520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718816 containerd[1452]: time="2025-02-13T15:33:05.718801240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718834 containerd[1452]: time="2025-02-13T15:33:05.718821440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:33:05.718868 containerd[1452]: time="2025-02-13T15:33:05.718855200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.718887 containerd[1452]: time="2025-02-13T15:33:05.718876440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.718909 containerd[1452]: time="2025-02-13T15:33:05.718890360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.718909 containerd[1452]: time="2025-02-13T15:33:05.718903520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719012 containerd[1452]: time="2025-02-13T15:33:05.718994920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719031 containerd[1452]: time="2025-02-13T15:33:05.719018680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 13 15:33:05.719052 containerd[1452]: time="2025-02-13T15:33:05.719032320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719071 containerd[1452]: time="2025-02-13T15:33:05.719060240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719092 containerd[1452]: time="2025-02-13T15:33:05.719076640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719116 containerd[1452]: time="2025-02-13T15:33:05.719092480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719116 containerd[1452]: time="2025-02-13T15:33:05.719112080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719219 containerd[1452]: time="2025-02-13T15:33:05.719126320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719250 containerd[1452]: time="2025-02-13T15:33:05.719238120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719274 containerd[1452]: time="2025-02-13T15:33:05.719258160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:33:05.719314 containerd[1452]: time="2025-02-13T15:33:05.719302200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719334 containerd[1452]: time="2025-02-13T15:33:05.719320680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719352 containerd[1452]: time="2025-02-13T15:33:05.719334000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:33:05.719816 containerd[1452]: time="2025-02-13T15:33:05.719793000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:33:05.719841 containerd[1452]: time="2025-02-13T15:33:05.719828440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:33:05.719873 containerd[1452]: time="2025-02-13T15:33:05.719841800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:33:05.719893 containerd[1452]: time="2025-02-13T15:33:05.719872680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:33:05.719893 containerd[1452]: time="2025-02-13T15:33:05.719884280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:33:05.719926 containerd[1452]: time="2025-02-13T15:33:05.719896920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:33:05.719926 containerd[1452]: time="2025-02-13T15:33:05.719907360Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:33:05.719926 containerd[1452]: time="2025-02-13T15:33:05.719917600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:33:05.720701 containerd[1452]: time="2025-02-13T15:33:05.720646040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:33:05.720823 containerd[1452]: time="2025-02-13T15:33:05.720722720Z" level=info msg="Connect containerd service" Feb 13 15:33:05.720823 containerd[1452]: time="2025-02-13T15:33:05.720766880Z" level=info msg="using legacy CRI server" Feb 13 15:33:05.720880 containerd[1452]: time="2025-02-13T15:33:05.720856680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:33:05.721143 containerd[1452]: time="2025-02-13T15:33:05.721126360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:33:05.721854 containerd[1452]: time="2025-02-13T15:33:05.721821800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:33:05.722408 
containerd[1452]: time="2025-02-13T15:33:05.722358920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:33:05.722453 containerd[1452]: time="2025-02-13T15:33:05.722422000Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:33:05.722554 containerd[1452]: time="2025-02-13T15:33:05.722528200Z" level=info msg="Start subscribing containerd event" Feb 13 15:33:05.722585 containerd[1452]: time="2025-02-13T15:33:05.722569560Z" level=info msg="Start recovering state" Feb 13 15:33:05.722675 containerd[1452]: time="2025-02-13T15:33:05.722628960Z" level=info msg="Start event monitor" Feb 13 15:33:05.722675 containerd[1452]: time="2025-02-13T15:33:05.722642760Z" level=info msg="Start snapshots syncer" Feb 13 15:33:05.722675 containerd[1452]: time="2025-02-13T15:33:05.722652160Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:33:05.722675 containerd[1452]: time="2025-02-13T15:33:05.722658560Z" level=info msg="Start streaming server" Feb 13 15:33:05.723760 containerd[1452]: time="2025-02-13T15:33:05.722774360Z" level=info msg="containerd successfully booted in 0.066978s" Feb 13 15:33:05.722861 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:33:05.808881 tar[1442]: linux-arm64/LICENSE Feb 13 15:33:05.808982 tar[1442]: linux-arm64/README.md Feb 13 15:33:05.822504 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:33:06.050869 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:33:06.070190 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:33:06.089806 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:33:06.097114 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:33:06.098476 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:33:06.101426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:33:06.115094 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:33:06.118213 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:33:06.120229 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:33:06.121341 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:33:06.907325 systemd-networkd[1373]: eth0: Gained IPv6LL Feb 13 15:33:06.913811 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:33:06.915613 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:33:06.926760 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:33:06.929365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:06.931687 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:33:06.947396 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:33:06.947654 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:33:06.948911 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:33:06.962844 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:33:07.430669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:07.432386 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 15:33:07.434498 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:07.439202 systemd[1]: Startup finished in 629ms (kernel) + 5.784s (initrd) + 3.944s (userspace) = 10.358s. Feb 13 15:33:07.974187 kubelet[1530]: E0213 15:33:07.974049 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:07.977214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:07.977384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:10.402298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:33:10.403638 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:57982.service - OpenSSH per-connection server daemon (10.0.0.1:57982). Feb 13 15:33:10.481628 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 57982 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:10.484017 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:10.494654 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:33:10.513704 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:33:10.515859 systemd-logind[1427]: New session 1 of user core. Feb 13 15:33:10.524574 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:33:10.527301 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:33:10.535715 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:33:10.616081 systemd[1548]: Queued start job for default target default.target. Feb 13 15:33:10.630521 systemd[1548]: Created slice app.slice - User Application Slice. Feb 13 15:33:10.630612 systemd[1548]: Reached target paths.target - Paths. Feb 13 15:33:10.630625 systemd[1548]: Reached target timers.target - Timers. Feb 13 15:33:10.631997 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:33:10.643620 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:33:10.643744 systemd[1548]: Reached target sockets.target - Sockets. Feb 13 15:33:10.643761 systemd[1548]: Reached target basic.target - Basic System. Feb 13 15:33:10.643800 systemd[1548]: Reached target default.target - Main User Target. Feb 13 15:33:10.643829 systemd[1548]: Startup finished in 101ms. Feb 13 15:33:10.644274 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:33:10.646165 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:33:10.712812 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:57994.service - OpenSSH per-connection server daemon (10.0.0.1:57994). Feb 13 15:33:10.770239 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:10.771952 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:10.776744 systemd-logind[1427]: New session 2 of user core. Feb 13 15:33:10.789622 systemd[1]: Started session-2.scope - Session 2 of User core. 
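This kubelet failure is the expected first-boot state rather than a bug: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml (kubeadm init or join creates it later), so the kubelet exits with status 1 and systemd records the failure and schedules a restart. A hypothetical pre-flight probe of the same path, as a sketch:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // The path from the kubelet error above; written by kubeadm init/join.
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
            fmt.Printf("%s missing: kubelet will keep exiting 1 until kubeadm writes it\n", path)
            os.Exit(1)
        }
        fmt.Println("kubelet config present")
    }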
Feb 13 15:33:10.843472 sshd[1561]: Connection closed by 10.0.0.1 port 57994 Feb 13 15:33:10.844033 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:10.853033 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:57994.service: Deactivated successfully. Feb 13 15:33:10.854548 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:33:10.856667 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:33:10.867902 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996). Feb 13 15:33:10.868941 systemd-logind[1427]: Removed session 2. Feb 13 15:33:10.907446 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:10.908838 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:10.912689 systemd-logind[1427]: New session 3 of user core. Feb 13 15:33:10.923616 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:33:10.973601 sshd[1568]: Connection closed by 10.0.0.1 port 57996 Feb 13 15:33:10.973900 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:10.991043 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:57996.service: Deactivated successfully. Feb 13 15:33:10.994572 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:33:10.995955 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:33:11.002707 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:58012.service - OpenSSH per-connection server daemon (10.0.0.1:58012). Feb 13 15:33:11.003655 systemd-logind[1427]: Removed session 3. Feb 13 15:33:11.040937 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 58012 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:11.042092 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:11.045622 systemd-logind[1427]: New session 4 of user core. Feb 13 15:33:11.054568 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:33:11.106965 sshd[1575]: Connection closed by 10.0.0.1 port 58012 Feb 13 15:33:11.107273 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:11.120905 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:58012.service: Deactivated successfully. Feb 13 15:33:11.122456 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:33:11.123855 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:33:11.124957 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028). Feb 13 15:33:11.125637 systemd-logind[1427]: Removed session 4. Feb 13 15:33:11.168298 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:11.169616 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:11.173442 systemd-logind[1427]: New session 5 of user core. Feb 13 15:33:11.184602 systemd[1]: Started session-5.scope - Session 5 of User core. 
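Each connection in this stretch follows the same lifecycle: sshd accepts the publickey for core, pam_unix opens a session, systemd-logind assigns a session number and scope, and on disconnect the per-connection sshd@N-10.0.0.112:22-10.0.0.1:PORT.service unit is deactivated. For illustration only, a minimal Go client, assuming golang.org/x/crypto/ssh and a private key authorized for the core user (the key path is hypothetical), that would produce exactly one such open/close pair on the host:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "core",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local QEMU VM, not production
        }
        client, err := ssh.Dial("tcp", "10.0.0.112:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close() // host logs "session closed for user core"
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.Output("systemctl is-system-running")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }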
Feb 13 15:33:11.258566 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:33:11.260811 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:11.275367 sudo[1583]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:11.276883 sshd[1582]: Connection closed by 10.0.0.1 port 58028 Feb 13 15:33:11.277294 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:11.297075 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:58028.service: Deactivated successfully. Feb 13 15:33:11.298660 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:33:11.300266 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:33:11.301941 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:58032.service - OpenSSH per-connection server daemon (10.0.0.1:58032). Feb 13 15:33:11.302644 systemd-logind[1427]: Removed session 5. Feb 13 15:33:11.346807 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 58032 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:11.348099 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:11.352563 systemd-logind[1427]: New session 6 of user core. Feb 13 15:33:11.361583 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:33:11.413244 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:33:11.413573 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:11.417254 sudo[1592]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:11.422502 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:33:11.422785 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:11.440729 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:33:11.465464 augenrules[1614]: No rules Feb 13 15:33:11.466782 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:33:11.468471 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:33:11.469908 sudo[1591]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:11.471516 sshd[1590]: Connection closed by 10.0.0.1 port 58032 Feb 13 15:33:11.472050 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:11.483021 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:58032.service: Deactivated successfully. Feb 13 15:33:11.484398 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:33:11.486703 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:33:11.487900 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:58040.service - OpenSSH per-connection server daemon (10.0.0.1:58040). Feb 13 15:33:11.488645 systemd-logind[1427]: Removed session 6. Feb 13 15:33:11.530350 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 58040 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:33:11.531531 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:11.535136 systemd-logind[1427]: New session 7 of user core. Feb 13 15:33:11.546605 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 15:33:11.597612 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:33:11.597887 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:11.954766 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:33:11.954831 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:33:12.232820 dockerd[1646]: time="2025-02-13T15:33:12.232534128Z" level=info msg="Starting up" Feb 13 15:33:12.451982 dockerd[1646]: time="2025-02-13T15:33:12.451928855Z" level=info msg="Loading containers: start." Feb 13 15:33:12.597426 kernel: Initializing XFRM netlink socket Feb 13 15:33:12.668607 systemd-networkd[1373]: docker0: Link UP Feb 13 15:33:12.707702 dockerd[1646]: time="2025-02-13T15:33:12.707662977Z" level=info msg="Loading containers: done." Feb 13 15:33:12.720901 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1491094654-merged.mount: Deactivated successfully. Feb 13 15:33:12.723997 dockerd[1646]: time="2025-02-13T15:33:12.723948920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:33:12.724088 dockerd[1646]: time="2025-02-13T15:33:12.724039161Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:33:12.724155 dockerd[1646]: time="2025-02-13T15:33:12.724136617Z" level=info msg="Daemon has completed initialization" Feb 13 15:33:12.751589 dockerd[1646]: time="2025-02-13T15:33:12.751092877Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:33:12.751272 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:33:13.428574 containerd[1452]: time="2025-02-13T15:33:13.428479760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:33:14.084549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691007041.mount: Deactivated successfully. 
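dockerd has now come up alongside containerd: it created the docker0 bridge, initialized the XFRM netlink socket for its networking, and the "API listen on /run/docker.sock" line means the Engine API is live. A sketch against the Docker Engine Go SDK (github.com/docker/docker/client; with no DOCKER_HOST set it defaults to unix:///var/run/docker.sock) that should report the 27.2.1 daemon seen in the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        v, err := cli.ServerVersion(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("docker %s (API %s)\n", v.Version, v.APIVersion)
    }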
Feb 13 15:33:15.071425 containerd[1452]: time="2025-02-13T15:33:15.071359156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:15.072623 containerd[1452]: time="2025-02-13T15:33:15.072563278Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863" Feb 13 15:33:15.073602 containerd[1452]: time="2025-02-13T15:33:15.073546020Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:15.077754 containerd[1452]: time="2025-02-13T15:33:15.077712910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:15.078790 containerd[1452]: time="2025-02-13T15:33:15.078766623Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.650238188s" Feb 13 15:33:15.078839 containerd[1452]: time="2025-02-13T15:33:15.078798139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:33:15.098104 containerd[1452]: time="2025-02-13T15:33:15.098066602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:33:16.470061 containerd[1452]: time="2025-02-13T15:33:16.469995037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:16.472271 containerd[1452]: time="2025-02-13T15:33:16.472221664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093" Feb 13 15:33:16.473272 containerd[1452]: time="2025-02-13T15:33:16.473243227Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:16.476419 containerd[1452]: time="2025-02-13T15:33:16.476374853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:16.478042 containerd[1452]: time="2025-02-13T15:33:16.478005965Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.379901055s" Feb 13 15:33:16.478080 containerd[1452]: time="2025-02-13T15:33:16.478044429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
15:33:16.497664 containerd[1452]: time="2025-02-13T15:33:16.497619835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:33:17.362129 containerd[1452]: time="2025-02-13T15:33:17.362082241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:17.362776 containerd[1452]: time="2025-02-13T15:33:17.362509764Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982" Feb 13 15:33:17.363426 containerd[1452]: time="2025-02-13T15:33:17.363374544Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:17.367203 containerd[1452]: time="2025-02-13T15:33:17.367165704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:17.368348 containerd[1452]: time="2025-02-13T15:33:17.368303112Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 870.642522ms" Feb 13 15:33:17.368348 containerd[1452]: time="2025-02-13T15:33:17.368341119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:33:17.386825 containerd[1452]: time="2025-02-13T15:33:17.386793019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:33:18.041624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:33:18.054661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:18.143727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:18.148092 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:18.193571 kubelet[1944]: E0213 15:33:18.193476 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:18.197092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:18.197220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:18.328085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769469084.mount: Deactivated successfully. 
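The PullImage/Pulled pairs through this stretch are the CRI plugin fetching the kubeadm control-plane images one at a time; each pull unpacks layers through a temporary mount that systemd then cleans up (the var-lib-containerd-tmpmounts lines), and the kubelet restart in between still fails for the same missing-config reason as before. The same pull can be reproduced directly against containerd, as a sketch assuming the v1 Go client; the CRI plugin stores kubelet images in the k8s.io namespace:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // Pull and unpack, as the CRI plugin does for the log's kube-proxy image.
        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.29.14", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }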
Feb 13 15:33:18.656804 containerd[1452]: time="2025-02-13T15:33:18.656672453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:18.657249 containerd[1452]: time="2025-02-13T15:33:18.657208985Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377" Feb 13 15:33:18.658100 containerd[1452]: time="2025-02-13T15:33:18.658057312Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:18.660166 containerd[1452]: time="2025-02-13T15:33:18.660129499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:18.660783 containerd[1452]: time="2025-02-13T15:33:18.660747842Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.273911377s" Feb 13 15:33:18.660821 containerd[1452]: time="2025-02-13T15:33:18.660783665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:33:18.679430 containerd[1452]: time="2025-02-13T15:33:18.679371661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:33:19.371539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837929013.mount: Deactivated successfully. 
Feb 13 15:33:20.052300 containerd[1452]: time="2025-02-13T15:33:20.052254873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.053268 containerd[1452]: time="2025-02-13T15:33:20.053010571Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:33:20.054031 containerd[1452]: time="2025-02-13T15:33:20.053969264Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.057162 containerd[1452]: time="2025-02-13T15:33:20.057112274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.058425 containerd[1452]: time="2025-02-13T15:33:20.058321914Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.378902023s" Feb 13 15:33:20.058425 containerd[1452]: time="2025-02-13T15:33:20.058353921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:33:20.076581 containerd[1452]: time="2025-02-13T15:33:20.076545662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:33:20.531826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099862235.mount: Deactivated successfully. 
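Every "Pulled image" line above pins a tag to a content-addressed repo digest (for example coredns v1.11.1 to sha256:1eeb4c7316...). That resolution step can be performed standalone, without downloading any layers, as a sketch using containerd's registry resolver (github.com/containerd/containerd/remotes/docker):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd/remotes/docker"
    )

    func main() {
        // Resolve a tag to its manifest descriptor; this mirrors the first
        // step of each pull logged above, minus the layer downloads.
        resolver := docker.NewResolver(docker.ResolverOptions{})
        name, desc, err := resolver.Resolve(context.Background(), "registry.k8s.io/pause:3.9")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s -> %s (manifest %d bytes)\n", name, desc.Digest, desc.Size)
    }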
Feb 13 15:33:20.536930 containerd[1452]: time="2025-02-13T15:33:20.536882509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.537723 containerd[1452]: time="2025-02-13T15:33:20.537684225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 15:33:20.538554 containerd[1452]: time="2025-02-13T15:33:20.538498543Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.543035 containerd[1452]: time="2025-02-13T15:33:20.541823331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:20.543035 containerd[1452]: time="2025-02-13T15:33:20.542632696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 466.04778ms" Feb 13 15:33:20.543035 containerd[1452]: time="2025-02-13T15:33:20.542655162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:33:20.561292 containerd[1452]: time="2025-02-13T15:33:20.561244439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:33:21.158096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174701057.mount: Deactivated successfully. Feb 13 15:33:22.769402 containerd[1452]: time="2025-02-13T15:33:22.769336649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:22.770460 containerd[1452]: time="2025-02-13T15:33:22.770419712Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Feb 13 15:33:22.771701 containerd[1452]: time="2025-02-13T15:33:22.771644599Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:22.775007 containerd[1452]: time="2025-02-13T15:33:22.774976356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:22.779178 containerd[1452]: time="2025-02-13T15:33:22.778987771Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.217706831s" Feb 13 15:33:22.779178 containerd[1452]: time="2025-02-13T15:33:22.779023991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:33:27.597199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
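At this point every image kubeadm needs for a v1.29 control plane is in the content store: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.29.14, coredns v1.11.1, pause 3.9 and etcd 3.5.10-0. (Note that the CRI config dumped earlier still names pause:3.8 as its SandboxImage; that one is pulled separately further down, when the first pod sandboxes are created.) A sketch that enumerates the store, again assuming the containerd v1 Go client:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            size, _ := img.Size(ctx) // content size, comparable to the log's "size" fields
            fmt.Printf("%-70s %d bytes\n", img.Name(), size)
        }
    }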
Feb 13 15:33:27.609601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:27.627496 systemd[1]: Reloading requested from client PID 2149 ('systemctl') (unit session-7.scope)... Feb 13 15:33:27.627511 systemd[1]: Reloading... Feb 13 15:33:27.705544 zram_generator::config[2191]: No configuration found. Feb 13 15:33:27.795336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:27.849325 systemd[1]: Reloading finished in 221 ms. Feb 13 15:33:27.893042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:27.895593 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:27.895784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:27.897158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:27.987522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:27.991049 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:28.032962 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:28.032962 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:28.032962 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:28.033279 kubelet[2235]: I0213 15:33:28.033009 2235 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:28.771575 kubelet[2235]: I0213 15:33:28.771055 2235 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:33:28.771575 kubelet[2235]: I0213 15:33:28.771091 2235 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:28.771575 kubelet[2235]: I0213 15:33:28.771410 2235 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:33:28.796044 kubelet[2235]: E0213 15:33:28.796009 2235 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.796134 kubelet[2235]: I0213 15:33:28.796124 2235 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:28.805153 kubelet[2235]: I0213 15:33:28.805117 2235 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:33:28.806055 kubelet[2235]: I0213 15:33:28.806015 2235 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:28.806236 kubelet[2235]: I0213 15:33:28.806213 2235 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:33:28.806317 kubelet[2235]: I0213 15:33:28.806239 2235 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:33:28.806317 kubelet[2235]: I0213 15:33:28.806248 2235 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:33:28.806379 kubelet[2235]: I0213 15:33:28.806364 2235 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:28.808448 kubelet[2235]: I0213 15:33:28.808418 2235 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:33:28.808448 kubelet[2235]: I0213 15:33:28.808449 2235 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:28.808517 kubelet[2235]: I0213 15:33:28.808473 2235 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:33:28.808517 kubelet[2235]: I0213 15:33:28.808487 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:28.809210 kubelet[2235]: W0213 15:33:28.809078 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.809210 kubelet[2235]: W0213 15:33:28.809109 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.809210 kubelet[2235]: E0213 15:33:28.809177 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.809210 
kubelet[2235]: E0213 15:33:28.809140 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.810530 kubelet[2235]: I0213 15:33:28.810503 2235 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:28.811076 kubelet[2235]: I0213 15:33:28.811062 2235 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:28.811618 kubelet[2235]: W0213 15:33:28.811598 2235 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:33:28.813550 kubelet[2235]: I0213 15:33:28.812637 2235 server.go:1256] "Started kubelet" Feb 13 15:33:28.813550 kubelet[2235]: I0213 15:33:28.812690 2235 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:28.813550 kubelet[2235]: I0213 15:33:28.813453 2235 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:33:28.818507 kubelet[2235]: I0213 15:33:28.818477 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:28.818745 kubelet[2235]: I0213 15:33:28.818699 2235 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:28.822251 kubelet[2235]: I0213 15:33:28.822226 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:28.823098 kubelet[2235]: I0213 15:33:28.823067 2235 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:33:28.824029 kubelet[2235]: I0213 15:33:28.823428 2235 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:33:28.824029 kubelet[2235]: I0213 15:33:28.823501 2235 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:33:28.824263 kubelet[2235]: W0213 15:33:28.824148 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.824263 kubelet[2235]: E0213 15:33:28.824213 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.824327 kubelet[2235]: E0213 15:33:28.824292 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Feb 13 15:33:28.829257 kubelet[2235]: E0213 15:33:28.829226 2235 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce63b9fd4efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:33:28.81261337 +0000 UTC m=+0.818230607,LastTimestamp:2025-02-13 15:33:28.81261337 +0000 UTC m=+0.818230607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:33:28.829659 kubelet[2235]: I0213 15:33:28.829627 2235 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:33:28.829737 kubelet[2235]: I0213 15:33:28.829710 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:28.829990 kubelet[2235]: E0213 15:33:28.829953 2235 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:28.830871 kubelet[2235]: I0213 15:33:28.830850 2235 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:28.840712 kubelet[2235]: I0213 15:33:28.840596 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:28.841633 kubelet[2235]: I0213 15:33:28.841609 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:33:28.841722 kubelet[2235]: I0213 15:33:28.841713 2235 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:33:28.841784 kubelet[2235]: I0213 15:33:28.841775 2235 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:33:28.841876 kubelet[2235]: E0213 15:33:28.841864 2235 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:28.844795 kubelet[2235]: W0213 15:33:28.844765 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.844877 kubelet[2235]: E0213 15:33:28.844803 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:28.845629 kubelet[2235]: I0213 15:33:28.845357 2235 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:33:28.845629 kubelet[2235]: I0213 15:33:28.845373 2235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:28.845629 kubelet[2235]: I0213 15:33:28.845415 2235 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:28.924747 kubelet[2235]: I0213 15:33:28.924710 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:28.928694 kubelet[2235]: E0213 15:33:28.928674 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:28.942907 kubelet[2235]: E0213 15:33:28.942879 2235 kubelet.go:2353] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Feb 13 15:33:29.025637 kubelet[2235]: E0213 15:33:29.025537 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Feb 13 15:33:29.038540 kubelet[2235]: I0213 15:33:29.038468 2235 policy_none.go:49] "None policy: Start" Feb 13 15:33:29.039185 kubelet[2235]: I0213 15:33:29.039168 2235 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:33:29.039238 kubelet[2235]: I0213 15:33:29.039210 2235 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:29.044379 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:33:29.058597 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:33:29.070608 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:33:29.071902 kubelet[2235]: I0213 15:33:29.071705 2235 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:29.071998 kubelet[2235]: I0213 15:33:29.071955 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:29.073428 kubelet[2235]: E0213 15:33:29.073408 2235 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:33:29.129774 kubelet[2235]: I0213 15:33:29.129750 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:29.130103 kubelet[2235]: E0213 15:33:29.130067 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:29.143302 kubelet[2235]: I0213 15:33:29.143276 2235 topology_manager.go:215] "Topology Admit Handler" podUID="5ad29580d0c7eb0ee1b326212ab66eb3" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:33:29.144209 kubelet[2235]: I0213 15:33:29.144183 2235 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:33:29.145166 kubelet[2235]: I0213 15:33:29.145142 2235 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:33:29.150491 systemd[1]: Created slice kubepods-burstable-pod5ad29580d0c7eb0ee1b326212ab66eb3.slice - libcontainer container kubepods-burstable-pod5ad29580d0c7eb0ee1b326212ab66eb3.slice. Feb 13 15:33:29.163368 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. Feb 13 15:33:29.176056 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. 
Feb 13 15:33:29.224976 kubelet[2235]: I0213 15:33:29.224921 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:29.224976 kubelet[2235]: I0213 15:33:29.224964 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:33:29.224976 kubelet[2235]: I0213 15:33:29.224991 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:29.225193 kubelet[2235]: I0213 15:33:29.225014 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:29.225193 kubelet[2235]: I0213 15:33:29.225064 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:29.225193 kubelet[2235]: I0213 15:33:29.225147 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:29.225193 kubelet[2235]: I0213 15:33:29.225186 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:29.225285 kubelet[2235]: I0213 15:33:29.225207 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:29.225285 kubelet[2235]: I0213 15:33:29.225228 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:29.426162 kubelet[2235]: E0213 15:33:29.426059 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Feb 13 15:33:29.463240 kubelet[2235]: E0213 15:33:29.463198 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:29.464120 containerd[1452]: time="2025-02-13T15:33:29.463885834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ad29580d0c7eb0ee1b326212ab66eb3,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:29.474105 kubelet[2235]: E0213 15:33:29.474076 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:29.474509 containerd[1452]: time="2025-02-13T15:33:29.474467788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:29.477858 kubelet[2235]: E0213 15:33:29.477836 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:29.478183 containerd[1452]: time="2025-02-13T15:33:29.478150917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:29.531823 kubelet[2235]: I0213 15:33:29.531792 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:29.532171 kubelet[2235]: E0213 15:33:29.532142 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:29.713230 kubelet[2235]: W0213 15:33:29.713106 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:29.713230 kubelet[2235]: E0213 15:33:29.713148 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:29.726768 kubelet[2235]: W0213 15:33:29.726704 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:29.726768 kubelet[2235]: E0213 15:33:29.726765 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:29.987957 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2535856122.mount: Deactivated successfully. Feb 13 15:33:29.992678 containerd[1452]: time="2025-02-13T15:33:29.992633505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:29.994932 containerd[1452]: time="2025-02-13T15:33:29.994736101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:33:29.996037 containerd[1452]: time="2025-02-13T15:33:29.995752771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:29.997590 containerd[1452]: time="2025-02-13T15:33:29.997561191Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:29.998029 containerd[1452]: time="2025-02-13T15:33:29.997987345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:29.999118 containerd[1452]: time="2025-02-13T15:33:29.999093590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:29.999545 containerd[1452]: time="2025-02-13T15:33:29.999511408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:30.001976 containerd[1452]: time="2025-02-13T15:33:30.001915744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:30.003192 kubelet[2235]: W0213 15:33:30.003163 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:30.003271 kubelet[2235]: E0213 15:33:30.003203 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:30.004049 containerd[1452]: time="2025-02-13T15:33:30.003888163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.678933ms" Feb 13 15:33:30.005636 containerd[1452]: time="2025-02-13T15:33:30.005611195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.64789ms" 
Feb 13 15:33:30.007577 containerd[1452]: time="2025-02-13T15:33:30.007442572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.902244ms" Feb 13 15:33:30.130982 containerd[1452]: time="2025-02-13T15:33:30.130433317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:30.130982 containerd[1452]: time="2025-02-13T15:33:30.130594393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:30.130982 containerd[1452]: time="2025-02-13T15:33:30.130605572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.130982 containerd[1452]: time="2025-02-13T15:33:30.130785320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.132796 containerd[1452]: time="2025-02-13T15:33:30.132725844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:30.132796 containerd[1452]: time="2025-02-13T15:33:30.132770401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:30.132796 containerd[1452]: time="2025-02-13T15:33:30.132788392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.132938 containerd[1452]: time="2025-02-13T15:33:30.132851540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.133580 containerd[1452]: time="2025-02-13T15:33:30.133443274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:30.133580 containerd[1452]: time="2025-02-13T15:33:30.133497046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:30.133580 containerd[1452]: time="2025-02-13T15:33:30.133514115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.133700 containerd[1452]: time="2025-02-13T15:33:30.133600543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:30.150658 systemd[1]: Started cri-containerd-3f7b0f373d5ee3960c2a52c02d84fd4d922217be19f7329dd9deab7d41135aa6.scope - libcontainer container 3f7b0f373d5ee3960c2a52c02d84fd4d922217be19f7329dd9deab7d41135aa6. Feb 13 15:33:30.154617 systemd[1]: Started cri-containerd-8b9fa0f6b15841989c8b54496026e522cf14f728ab7242e0aa3b3ab448f4a951.scope - libcontainer container 8b9fa0f6b15841989c8b54496026e522cf14f728ab7242e0aa3b3ab448f4a951. 
Feb 13 15:33:30.155550 systemd[1]: Started cri-containerd-90282192227ab28c27bfc91180f9414e3aa12aec9e048ad742b79f08b3e0eeb7.scope - libcontainer container 90282192227ab28c27bfc91180f9414e3aa12aec9e048ad742b79f08b3e0eeb7. Feb 13 15:33:30.162866 kubelet[2235]: W0213 15:33:30.162815 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:30.163117 kubelet[2235]: E0213 15:33:30.162873 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:30.187170 containerd[1452]: time="2025-02-13T15:33:30.187069064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ad29580d0c7eb0ee1b326212ab66eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f7b0f373d5ee3960c2a52c02d84fd4d922217be19f7329dd9deab7d41135aa6\"" Feb 13 15:33:30.188108 kubelet[2235]: E0213 15:33:30.188078 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.193241 containerd[1452]: time="2025-02-13T15:33:30.193207781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"90282192227ab28c27bfc91180f9414e3aa12aec9e048ad742b79f08b3e0eeb7\"" Feb 13 15:33:30.194557 containerd[1452]: time="2025-02-13T15:33:30.194523875Z" level=info msg="CreateContainer within sandbox \"3f7b0f373d5ee3960c2a52c02d84fd4d922217be19f7329dd9deab7d41135aa6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:33:30.194777 kubelet[2235]: E0213 15:33:30.194757 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.196082 containerd[1452]: time="2025-02-13T15:33:30.196055740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b9fa0f6b15841989c8b54496026e522cf14f728ab7242e0aa3b3ab448f4a951\"" Feb 13 15:33:30.198073 kubelet[2235]: E0213 15:33:30.198055 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.199560 containerd[1452]: time="2025-02-13T15:33:30.199532576Z" level=info msg="CreateContainer within sandbox \"90282192227ab28c27bfc91180f9414e3aa12aec9e048ad742b79f08b3e0eeb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:33:30.201638 containerd[1452]: time="2025-02-13T15:33:30.201610015Z" level=info msg="CreateContainer within sandbox \"8b9fa0f6b15841989c8b54496026e522cf14f728ab7242e0aa3b3ab448f4a951\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:33:30.213072 containerd[1452]: time="2025-02-13T15:33:30.213028177Z" level=info msg="CreateContainer within sandbox 
\"90282192227ab28c27bfc91180f9414e3aa12aec9e048ad742b79f08b3e0eeb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c90569de37d2ece93c3cba26ad75eab9ef59e5b210a2e214fc223c6bb92a81f3\"" Feb 13 15:33:30.213676 containerd[1452]: time="2025-02-13T15:33:30.213647758Z" level=info msg="StartContainer for \"c90569de37d2ece93c3cba26ad75eab9ef59e5b210a2e214fc223c6bb92a81f3\"" Feb 13 15:33:30.216497 containerd[1452]: time="2025-02-13T15:33:30.216036971Z" level=info msg="CreateContainer within sandbox \"3f7b0f373d5ee3960c2a52c02d84fd4d922217be19f7329dd9deab7d41135aa6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9314d24fe529b73125aa37805c4f8f0722d9fdc52c940b94d82de524d61ece91\"" Feb 13 15:33:30.216497 containerd[1452]: time="2025-02-13T15:33:30.216372065Z" level=info msg="StartContainer for \"9314d24fe529b73125aa37805c4f8f0722d9fdc52c940b94d82de524d61ece91\"" Feb 13 15:33:30.227202 kubelet[2235]: E0213 15:33:30.227169 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Feb 13 15:33:30.227695 containerd[1452]: time="2025-02-13T15:33:30.227662688Z" level=info msg="CreateContainer within sandbox \"8b9fa0f6b15841989c8b54496026e522cf14f728ab7242e0aa3b3ab448f4a951\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47c04aa8ebd82f81110c4ec23e4fcf9f462c6453055919e9fb2d25494a5f6fc1\"" Feb 13 15:33:30.228188 containerd[1452]: time="2025-02-13T15:33:30.228166471Z" level=info msg="StartContainer for \"47c04aa8ebd82f81110c4ec23e4fcf9f462c6453055919e9fb2d25494a5f6fc1\"" Feb 13 15:33:30.238543 systemd[1]: Started cri-containerd-c90569de37d2ece93c3cba26ad75eab9ef59e5b210a2e214fc223c6bb92a81f3.scope - libcontainer container c90569de37d2ece93c3cba26ad75eab9ef59e5b210a2e214fc223c6bb92a81f3. Feb 13 15:33:30.246542 systemd[1]: Started cri-containerd-9314d24fe529b73125aa37805c4f8f0722d9fdc52c940b94d82de524d61ece91.scope - libcontainer container 9314d24fe529b73125aa37805c4f8f0722d9fdc52c940b94d82de524d61ece91. Feb 13 15:33:30.249130 systemd[1]: Started cri-containerd-47c04aa8ebd82f81110c4ec23e4fcf9f462c6453055919e9fb2d25494a5f6fc1.scope - libcontainer container 47c04aa8ebd82f81110c4ec23e4fcf9f462c6453055919e9fb2d25494a5f6fc1. 
Feb 13 15:33:30.303516 containerd[1452]: time="2025-02-13T15:33:30.303472724Z" level=info msg="StartContainer for \"c90569de37d2ece93c3cba26ad75eab9ef59e5b210a2e214fc223c6bb92a81f3\" returns successfully" Feb 13 15:33:30.303649 containerd[1452]: time="2025-02-13T15:33:30.303626708Z" level=info msg="StartContainer for \"9314d24fe529b73125aa37805c4f8f0722d9fdc52c940b94d82de524d61ece91\" returns successfully" Feb 13 15:33:30.303674 containerd[1452]: time="2025-02-13T15:33:30.303657681Z" level=info msg="StartContainer for \"47c04aa8ebd82f81110c4ec23e4fcf9f462c6453055919e9fb2d25494a5f6fc1\" returns successfully" Feb 13 15:33:30.335285 kubelet[2235]: I0213 15:33:30.335009 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:30.335526 kubelet[2235]: E0213 15:33:30.335503 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:30.853085 kubelet[2235]: E0213 15:33:30.852909 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.854629 kubelet[2235]: E0213 15:33:30.854542 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.855525 kubelet[2235]: E0213 15:33:30.855507 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:31.830562 kubelet[2235]: E0213 15:33:31.830525 2235 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:33:31.856209 kubelet[2235]: E0213 15:33:31.856141 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:31.937622 kubelet[2235]: I0213 15:33:31.937344 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:31.943522 kubelet[2235]: I0213 15:33:31.943491 2235 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:33:31.950156 kubelet[2235]: E0213 15:33:31.950113 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.050724 kubelet[2235]: E0213 15:33:32.050673 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.151926 kubelet[2235]: E0213 15:33:32.151490 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.252063 kubelet[2235]: E0213 15:33:32.252022 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.352748 kubelet[2235]: E0213 15:33:32.352710 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.453543 kubelet[2235]: E0213 15:33:32.453181 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.553707 kubelet[2235]: E0213 15:33:32.553650 2235 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:32.596755 kubelet[2235]: E0213 15:33:32.596695 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:32.811411 kubelet[2235]: I0213 15:33:32.811265 2235 apiserver.go:52] "Watching apiserver" Feb 13 15:33:32.824294 kubelet[2235]: I0213 15:33:32.824243 2235 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:33:32.869173 kubelet[2235]: E0213 15:33:32.869139 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:33.858196 kubelet[2235]: E0213 15:33:33.858158 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:34.114450 systemd[1]: Reloading requested from client PID 2514 ('systemctl') (unit session-7.scope)... Feb 13 15:33:34.114472 systemd[1]: Reloading... Feb 13 15:33:34.176418 zram_generator::config[2553]: No configuration found. Feb 13 15:33:34.269249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:34.334210 systemd[1]: Reloading finished in 219 ms. Feb 13 15:33:34.366548 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:34.385468 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:34.385709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:34.385765 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 114.7M memory peak, 0B memory swap peak. Feb 13 15:33:34.395759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:34.492438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:34.497144 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:34.540693 kubelet[2595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:34.540693 kubelet[2595]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:34.540693 kubelet[2595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:33:34.541051 kubelet[2595]: I0213 15:33:34.540769 2595 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:34.545667 kubelet[2595]: I0213 15:33:34.545633 2595 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:33:34.545778 kubelet[2595]: I0213 15:33:34.545685 2595 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:34.545942 kubelet[2595]: I0213 15:33:34.545914 2595 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:33:34.548029 kubelet[2595]: I0213 15:33:34.547686 2595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:33:34.549640 kubelet[2595]: I0213 15:33:34.549527 2595 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:34.559784 kubelet[2595]: I0213 15:33:34.559744 2595 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:33:34.560437 kubelet[2595]: I0213 15:33:34.560040 2595 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:34.560437 kubelet[2595]: I0213 15:33:34.560229 2595 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:33:34.560437 kubelet[2595]: I0213 15:33:34.560255 2595 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:33:34.560437 kubelet[2595]: I0213 15:33:34.560265 2595 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:33:34.560437 kubelet[2595]: I0213 15:33:34.560296 2595 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:34.560663 kubelet[2595]: I0213 15:33:34.560457 2595 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:33:34.560663 kubelet[2595]: I0213 15:33:34.560476 2595 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:34.561172 kubelet[2595]: I0213 15:33:34.560909 2595 kubelet.go:312] "Adding apiserver pod source" Feb 13 
15:33:34.561172 kubelet[2595]: I0213 15:33:34.560936 2595 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:34.561677 kubelet[2595]: I0213 15:33:34.561658 2595 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:34.562240 kubelet[2595]: I0213 15:33:34.561972 2595 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:34.562855 kubelet[2595]: I0213 15:33:34.562839 2595 server.go:1256] "Started kubelet" Feb 13 15:33:34.563558 kubelet[2595]: I0213 15:33:34.563523 2595 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:34.563718 kubelet[2595]: I0213 15:33:34.563694 2595 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:34.564368 kubelet[2595]: I0213 15:33:34.564350 2595 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:34.565072 kubelet[2595]: I0213 15:33:34.564455 2595 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:33:34.567640 kubelet[2595]: I0213 15:33:34.565675 2595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:34.568740 kubelet[2595]: E0213 15:33:34.567109 2595 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:34.568820 kubelet[2595]: I0213 15:33:34.568775 2595 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:33:34.568918 kubelet[2595]: I0213 15:33:34.568894 2595 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:33:34.569160 kubelet[2595]: I0213 15:33:34.569032 2595 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:33:34.570534 kubelet[2595]: I0213 15:33:34.570515 2595 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:33:34.570740 kubelet[2595]: I0213 15:33:34.570717 2595 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:34.572079 kubelet[2595]: I0213 15:33:34.572061 2595 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:34.583722 sudo[2615]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:33:34.584089 sudo[2615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:33:34.599673 kubelet[2595]: I0213 15:33:34.599473 2595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:34.605256 kubelet[2595]: I0213 15:33:34.605215 2595 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:33:34.605256 kubelet[2595]: I0213 15:33:34.605247 2595 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:33:34.605256 kubelet[2595]: I0213 15:33:34.605263 2595 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:33:34.606331 kubelet[2595]: E0213 15:33:34.606310 2595 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:34.634634 kubelet[2595]: I0213 15:33:34.634535 2595 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:33:34.634634 kubelet[2595]: I0213 15:33:34.634559 2595 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:34.634634 kubelet[2595]: I0213 15:33:34.634577 2595 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:34.634785 kubelet[2595]: I0213 15:33:34.634729 2595 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:33:34.634785 kubelet[2595]: I0213 15:33:34.634750 2595 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:33:34.634785 kubelet[2595]: I0213 15:33:34.634757 2595 policy_none.go:49] "None policy: Start" Feb 13 15:33:34.635381 kubelet[2595]: I0213 15:33:34.635362 2595 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:33:34.635381 kubelet[2595]: I0213 15:33:34.635386 2595 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:34.635571 kubelet[2595]: I0213 15:33:34.635557 2595 state_mem.go:75] "Updated machine memory state" Feb 13 15:33:34.643418 kubelet[2595]: I0213 15:33:34.643372 2595 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:34.643967 kubelet[2595]: I0213 15:33:34.643844 2595 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:34.672940 kubelet[2595]: I0213 15:33:34.672910 2595 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:34.678938 kubelet[2595]: I0213 15:33:34.678907 2595 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:33:34.679072 kubelet[2595]: I0213 15:33:34.678991 2595 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:33:34.708743 kubelet[2595]: I0213 15:33:34.708699 2595 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:33:34.708862 kubelet[2595]: I0213 15:33:34.708808 2595 topology_manager.go:215] "Topology Admit Handler" podUID="5ad29580d0c7eb0ee1b326212ab66eb3" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:33:34.708909 kubelet[2595]: I0213 15:33:34.708874 2595 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:33:34.716573 kubelet[2595]: E0213 15:33:34.716536 2595 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:34.870063 kubelet[2595]: I0213 15:33:34.869731 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:34.870063 kubelet[2595]: I0213 15:33:34.869787 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:34.870063 kubelet[2595]: I0213 15:33:34.869812 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:34.870063 kubelet[2595]: I0213 15:33:34.869833 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:33:34.870063 kubelet[2595]: I0213 15:33:34.869862 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:34.870283 kubelet[2595]: I0213 15:33:34.869897 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad29580d0c7eb0ee1b326212ab66eb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ad29580d0c7eb0ee1b326212ab66eb3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:34.870283 kubelet[2595]: I0213 15:33:34.869920 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:34.870283 kubelet[2595]: I0213 15:33:34.869942 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:34.870283 kubelet[2595]: I0213 15:33:34.869987 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:35.016161 kubelet[2595]: E0213 15:33:35.016053 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.016914 kubelet[2595]: E0213 15:33:35.016890 
2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.017359 kubelet[2595]: E0213 15:33:35.017322 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.025276 sudo[2615]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:35.561848 kubelet[2595]: I0213 15:33:35.561793 2595 apiserver.go:52] "Watching apiserver" Feb 13 15:33:35.569730 kubelet[2595]: I0213 15:33:35.569692 2595 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:33:35.618535 kubelet[2595]: E0213 15:33:35.618051 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.618535 kubelet[2595]: E0213 15:33:35.618276 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.623867 kubelet[2595]: E0213 15:33:35.623345 2595 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:33:35.623867 kubelet[2595]: E0213 15:33:35.623636 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.651866 kubelet[2595]: I0213 15:33:35.651646 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.651605005 podStartE2EDuration="1.651605005s" podCreationTimestamp="2025-02-13 15:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:35.643726196 +0000 UTC m=+1.143178250" watchObservedRunningTime="2025-02-13 15:33:35.651605005 +0000 UTC m=+1.151057019" Feb 13 15:33:35.659927 kubelet[2595]: I0213 15:33:35.659616 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.659573532 podStartE2EDuration="3.659573532s" podCreationTimestamp="2025-02-13 15:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:35.652332564 +0000 UTC m=+1.151784538" watchObservedRunningTime="2025-02-13 15:33:35.659573532 +0000 UTC m=+1.159025546" Feb 13 15:33:35.668849 kubelet[2595]: I0213 15:33:35.668796 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.668759211 podStartE2EDuration="1.668759211s" podCreationTimestamp="2025-02-13 15:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:35.660050672 +0000 UTC m=+1.159502686" watchObservedRunningTime="2025-02-13 15:33:35.668759211 +0000 UTC m=+1.168211225" Feb 13 15:33:36.618172 kubelet[2595]: E0213 15:33:36.617941 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:36.618172 kubelet[2595]: E0213 15:33:36.618084 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:36.768784 sudo[1625]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:36.769936 sshd[1624]: Connection closed by 10.0.0.1 port 58040 Feb 13 15:33:36.770458 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:36.774139 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:58040.service: Deactivated successfully. Feb 13 15:33:36.775799 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:33:36.775972 systemd[1]: session-7.scope: Consumed 7.368s CPU time, 190.6M memory peak, 0B memory swap peak. Feb 13 15:33:36.776430 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:33:36.777434 systemd-logind[1427]: Removed session 7. Feb 13 15:33:37.620006 kubelet[2595]: E0213 15:33:37.619972 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:41.778430 kubelet[2595]: E0213 15:33:41.777442 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:42.626450 kubelet[2595]: E0213 15:33:42.626153 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:44.042031 kubelet[2595]: E0213 15:33:44.041983 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:44.629231 kubelet[2595]: E0213 15:33:44.628939 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:46.839626 kubelet[2595]: E0213 15:33:46.837354 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:47.633859 kubelet[2595]: E0213 15:33:47.633101 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:48.701831 kubelet[2595]: I0213 15:33:48.701758 2595 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:33:48.702193 containerd[1452]: time="2025-02-13T15:33:48.702122618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:33:48.703204 kubelet[2595]: I0213 15:33:48.702478 2595 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:33:48.736285 kubelet[2595]: I0213 15:33:48.735349 2595 topology_manager.go:215] "Topology Admit Handler" podUID="e782aef0-0b56-452a-8a56-22b885145f2a" podNamespace="kube-system" podName="kube-proxy-9vn2s" Feb 13 15:33:48.742873 kubelet[2595]: I0213 15:33:48.742833 2595 topology_manager.go:215] "Topology Admit Handler" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" podNamespace="kube-system" podName="cilium-crc8g" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765428 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e782aef0-0b56-452a-8a56-22b885145f2a-lib-modules\") pod \"kube-proxy-9vn2s\" (UID: \"e782aef0-0b56-452a-8a56-22b885145f2a\") " pod="kube-system/kube-proxy-9vn2s" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765475 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-cgroup\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765500 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24af0153-5005-4f62-b880-72fc5025b2c2-clustermesh-secrets\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765521 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-run\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765614 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-net\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.768489 kubelet[2595]: I0213 15:33:48.765678 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cni-path\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.768327 systemd[1]: Created slice kubepods-besteffort-pode782aef0_0b56_452a_8a56_22b885145f2a.slice - libcontainer container kubepods-besteffort-pode782aef0_0b56_452a_8a56_22b885145f2a.slice. 
Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765719 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e782aef0-0b56-452a-8a56-22b885145f2a-xtables-lock\") pod \"kube-proxy-9vn2s\" (UID: \"e782aef0-0b56-452a-8a56-22b885145f2a\") " pod="kube-system/kube-proxy-9vn2s" Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765748 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e782aef0-0b56-452a-8a56-22b885145f2a-kube-proxy\") pod \"kube-proxy-9vn2s\" (UID: \"e782aef0-0b56-452a-8a56-22b885145f2a\") " pod="kube-system/kube-proxy-9vn2s" Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765769 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-hubble-tls\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765803 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-xtables-lock\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765843 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-bpf-maps\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769534 kubelet[2595]: I0213 15:33:48.765864 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-hostproc\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769702 kubelet[2595]: I0213 15:33:48.765884 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-lib-modules\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769702 kubelet[2595]: I0213 15:33:48.765904 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkrw\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-kube-api-access-gzkrw\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769702 kubelet[2595]: I0213 15:33:48.765948 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcb59\" (UniqueName: \"kubernetes.io/projected/e782aef0-0b56-452a-8a56-22b885145f2a-kube-api-access-fcb59\") pod \"kube-proxy-9vn2s\" (UID: \"e782aef0-0b56-452a-8a56-22b885145f2a\") " pod="kube-system/kube-proxy-9vn2s" Feb 13 15:33:48.769702 kubelet[2595]: I0213 15:33:48.765981 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-kernel\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769702 kubelet[2595]: I0213 15:33:48.766033 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-etc-cni-netd\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.769847 kubelet[2595]: I0213 15:33:48.766057 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-config-path\") pod \"cilium-crc8g\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") " pod="kube-system/cilium-crc8g" Feb 13 15:33:48.785580 systemd[1]: Created slice kubepods-burstable-pod24af0153_5005_4f62_b880_72fc5025b2c2.slice - libcontainer container kubepods-burstable-pod24af0153_5005_4f62_b880_72fc5025b2c2.slice. Feb 13 15:33:48.817707 kubelet[2595]: I0213 15:33:48.817018 2595 topology_manager.go:215] "Topology Admit Handler" podUID="ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" podNamespace="kube-system" podName="cilium-operator-5cc964979-q4k76" Feb 13 15:33:48.827718 systemd[1]: Created slice kubepods-besteffort-podce7691a5_4d42_47cf_b12b_e4016c5ee3a7.slice - libcontainer container kubepods-besteffort-podce7691a5_4d42_47cf_b12b_e4016c5ee3a7.slice. Feb 13 15:33:48.866816 kubelet[2595]: I0213 15:33:48.866762 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psk45\" (UniqueName: \"kubernetes.io/projected/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-kube-api-access-psk45\") pod \"cilium-operator-5cc964979-q4k76\" (UID: \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\") " pod="kube-system/cilium-operator-5cc964979-q4k76" Feb 13 15:33:48.871191 kubelet[2595]: I0213 15:33:48.868234 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-cilium-config-path\") pod \"cilium-operator-5cc964979-q4k76\" (UID: \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\") " pod="kube-system/cilium-operator-5cc964979-q4k76" Feb 13 15:33:49.078918 kubelet[2595]: E0213 15:33:49.078651 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.079752 containerd[1452]: time="2025-02-13T15:33:49.079475424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vn2s,Uid:e782aef0-0b56-452a-8a56-22b885145f2a,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:49.092790 kubelet[2595]: E0213 15:33:49.092741 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.094477 containerd[1452]: time="2025-02-13T15:33:49.094419085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crc8g,Uid:24af0153-5005-4f62-b880-72fc5025b2c2,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:49.103234 containerd[1452]: time="2025-02-13T15:33:49.103109028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:49.103234 containerd[1452]: time="2025-02-13T15:33:49.103162134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:49.103418 containerd[1452]: time="2025-02-13T15:33:49.103173539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.103418 containerd[1452]: time="2025-02-13T15:33:49.103352146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.123307 containerd[1452]: time="2025-02-13T15:33:49.123186584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:49.123437 containerd[1452]: time="2025-02-13T15:33:49.123254937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:49.123437 containerd[1452]: time="2025-02-13T15:33:49.123323890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.123508 containerd[1452]: time="2025-02-13T15:33:49.123462398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.126585 systemd[1]: Started cri-containerd-c5842de7f21d8de3da6dce77f7602ae3d92c28e0652037ece7133a039464b3e2.scope - libcontainer container c5842de7f21d8de3da6dce77f7602ae3d92c28e0652037ece7133a039464b3e2. Feb 13 15:33:49.132335 kubelet[2595]: E0213 15:33:49.132306 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.132985 containerd[1452]: time="2025-02-13T15:33:49.132953850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q4k76,Uid:ce7691a5-4d42-47cf-b12b-e4016c5ee3a7,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:49.145550 systemd[1]: Started cri-containerd-c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8.scope - libcontainer container c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8. Feb 13 15:33:49.160674 containerd[1452]: time="2025-02-13T15:33:49.160628697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vn2s,Uid:e782aef0-0b56-452a-8a56-22b885145f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5842de7f21d8de3da6dce77f7602ae3d92c28e0652037ece7133a039464b3e2\"" Feb 13 15:33:49.163722 kubelet[2595]: E0213 15:33:49.163688 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.170820 containerd[1452]: time="2025-02-13T15:33:49.168976274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:49.170820 containerd[1452]: time="2025-02-13T15:33:49.170077409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:49.170820 containerd[1452]: time="2025-02-13T15:33:49.170091616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.170820 containerd[1452]: time="2025-02-13T15:33:49.170177817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:49.178036 containerd[1452]: time="2025-02-13T15:33:49.177986772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crc8g,Uid:24af0153-5005-4f62-b880-72fc5025b2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\"" Feb 13 15:33:49.178730 containerd[1452]: time="2025-02-13T15:33:49.178702240Z" level=info msg="CreateContainer within sandbox \"c5842de7f21d8de3da6dce77f7602ae3d92c28e0652037ece7133a039464b3e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:33:49.180608 kubelet[2595]: E0213 15:33:49.180573 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.182959 containerd[1452]: time="2025-02-13T15:33:49.182927333Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:33:49.195604 systemd[1]: Started cri-containerd-241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c.scope - libcontainer container 241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c. Feb 13 15:33:49.203953 containerd[1452]: time="2025-02-13T15:33:49.203784107Z" level=info msg="CreateContainer within sandbox \"c5842de7f21d8de3da6dce77f7602ae3d92c28e0652037ece7133a039464b3e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1748f7a6dd9b8cd30618149b9dc21abb09958d42882ab017740b2592562e1a6d\"" Feb 13 15:33:49.204612 containerd[1452]: time="2025-02-13T15:33:49.204551000Z" level=info msg="StartContainer for \"1748f7a6dd9b8cd30618149b9dc21abb09958d42882ab017740b2592562e1a6d\"" Feb 13 15:33:49.249900 containerd[1452]: time="2025-02-13T15:33:49.249755605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q4k76,Uid:ce7691a5-4d42-47cf-b12b-e4016c5ee3a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\"" Feb 13 15:33:49.250550 kubelet[2595]: E0213 15:33:49.250531 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.253619 systemd[1]: Started cri-containerd-1748f7a6dd9b8cd30618149b9dc21abb09958d42882ab017740b2592562e1a6d.scope - libcontainer container 1748f7a6dd9b8cd30618149b9dc21abb09958d42882ab017740b2592562e1a6d. 
Feb 13 15:33:49.281941 containerd[1452]: time="2025-02-13T15:33:49.281879975Z" level=info msg="StartContainer for \"1748f7a6dd9b8cd30618149b9dc21abb09958d42882ab017740b2592562e1a6d\" returns successfully" Feb 13 15:33:49.640673 kubelet[2595]: E0213 15:33:49.640624 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:49.652546 kubelet[2595]: I0213 15:33:49.652495 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9vn2s" podStartSLOduration=1.652455963 podStartE2EDuration="1.652455963s" podCreationTimestamp="2025-02-13 15:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:49.651840544 +0000 UTC m=+15.151292558" watchObservedRunningTime="2025-02-13 15:33:49.652455963 +0000 UTC m=+15.151907977" Feb 13 15:33:51.082786 update_engine[1432]: I20250213 15:33:51.082715 1432 update_attempter.cc:509] Updating boot flags... Feb 13 15:33:51.100667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2959) Feb 13 15:33:51.142492 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2907) Feb 13 15:33:56.488349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237493367.mount: Deactivated successfully. Feb 13 15:33:58.599491 containerd[1452]: time="2025-02-13T15:33:58.599428158Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:58.603745 containerd[1452]: time="2025-02-13T15:33:58.603679779Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:33:58.604604 containerd[1452]: time="2025-02-13T15:33:58.604543171Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:58.607016 containerd[1452]: time="2025-02-13T15:33:58.606963934Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.421962795s" Feb 13 15:33:58.607016 containerd[1452]: time="2025-02-13T15:33:58.607003747Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:33:58.615758 containerd[1452]: time="2025-02-13T15:33:58.615695087Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:33:58.616726 containerd[1452]: time="2025-02-13T15:33:58.616236458Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 
15:33:58.645338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639936115.mount: Deactivated successfully. Feb 13 15:33:58.650838 containerd[1452]: time="2025-02-13T15:33:58.650779908Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\"" Feb 13 15:33:58.651368 containerd[1452]: time="2025-02-13T15:33:58.651329442Z" level=info msg="StartContainer for \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\"" Feb 13 15:33:58.682618 systemd[1]: Started cri-containerd-aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150.scope - libcontainer container aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150. Feb 13 15:33:58.725718 containerd[1452]: time="2025-02-13T15:33:58.725659116Z" level=info msg="StartContainer for \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\" returns successfully" Feb 13 15:33:58.785290 systemd[1]: cri-containerd-aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150.scope: Deactivated successfully. Feb 13 15:33:58.983662 containerd[1452]: time="2025-02-13T15:33:58.983492046Z" level=info msg="shim disconnected" id=aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150 namespace=k8s.io Feb 13 15:33:58.983662 containerd[1452]: time="2025-02-13T15:33:58.983546943Z" level=warning msg="cleaning up after shim disconnected" id=aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150 namespace=k8s.io Feb 13 15:33:58.983662 containerd[1452]: time="2025-02-13T15:33:58.983559107Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:59.634458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150-rootfs.mount: Deactivated successfully. Feb 13 15:33:59.676765 kubelet[2595]: E0213 15:33:59.676718 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:59.680165 containerd[1452]: time="2025-02-13T15:33:59.680113764Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:33:59.707342 containerd[1452]: time="2025-02-13T15:33:59.707280161Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\"" Feb 13 15:33:59.708064 containerd[1452]: time="2025-02-13T15:33:59.707824045Z" level=info msg="StartContainer for \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\"" Feb 13 15:33:59.762605 systemd[1]: Started cri-containerd-f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e.scope - libcontainer container f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e. Feb 13 15:33:59.794922 containerd[1452]: time="2025-02-13T15:33:59.794848903Z" level=info msg="StartContainer for \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\" returns successfully" Feb 13 15:33:59.831305 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 15:33:59.831750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:59.831836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:59.845026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:59.851310 systemd[1]: cri-containerd-f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e.scope: Deactivated successfully. Feb 13 15:33:59.883025 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:59.883317 containerd[1452]: time="2025-02-13T15:33:59.883022708Z" level=info msg="shim disconnected" id=f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e namespace=k8s.io Feb 13 15:33:59.883317 containerd[1452]: time="2025-02-13T15:33:59.883100412Z" level=warning msg="cleaning up after shim disconnected" id=f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e namespace=k8s.io Feb 13 15:33:59.883317 containerd[1452]: time="2025-02-13T15:33:59.883108694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:59.895952 containerd[1452]: time="2025-02-13T15:33:59.895740226Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:33:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:34:00.111441 containerd[1452]: time="2025-02-13T15:34:00.111349314Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:00.111800 containerd[1452]: time="2025-02-13T15:34:00.111757311Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:34:00.112547 containerd[1452]: time="2025-02-13T15:34:00.112523093Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:00.114041 containerd[1452]: time="2025-02-13T15:34:00.114006442Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.497732413s" Feb 13 15:34:00.114099 containerd[1452]: time="2025-02-13T15:34:00.114042372Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:34:00.116270 containerd[1452]: time="2025-02-13T15:34:00.116222922Z" level=info msg="CreateContainer within sandbox \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:34:00.125602 containerd[1452]: time="2025-02-13T15:34:00.125553299Z" level=info msg="CreateContainer within sandbox \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\"" Feb 13 15:34:00.126131 containerd[1452]: time="2025-02-13T15:34:00.126090054Z" level=info msg="StartContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\"" Feb 13 15:34:00.152577 systemd[1]: Started cri-containerd-e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936.scope - libcontainer container e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936. Feb 13 15:34:00.172502 containerd[1452]: time="2025-02-13T15:34:00.172462578Z" level=info msg="StartContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" returns successfully" Feb 13 15:34:00.644027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e-rootfs.mount: Deactivated successfully. Feb 13 15:34:00.646683 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:43132.service - OpenSSH per-connection server daemon (10.0.0.1:43132). Feb 13 15:34:00.688197 kubelet[2595]: E0213 15:34:00.688091 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:00.695589 sshd[3179]: Accepted publickey for core from 10.0.0.1 port 43132 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:00.698248 sshd-session[3179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:00.701813 kubelet[2595]: E0213 15:34:00.701328 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:00.701982 containerd[1452]: time="2025-02-13T15:34:00.701656174Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:34:00.710761 systemd-logind[1427]: New session 8 of user core. Feb 13 15:34:00.719467 kubelet[2595]: I0213 15:34:00.717489 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-q4k76" podStartSLOduration=1.854521485 podStartE2EDuration="12.717358673s" podCreationTimestamp="2025-02-13 15:33:48 +0000 UTC" firstStartedPulling="2025-02-13 15:33:49.251383556 +0000 UTC m=+14.750835570" lastFinishedPulling="2025-02-13 15:34:00.114220744 +0000 UTC m=+25.613672758" observedRunningTime="2025-02-13 15:34:00.716111833 +0000 UTC m=+26.215563887" watchObservedRunningTime="2025-02-13 15:34:00.717358673 +0000 UTC m=+26.216810687" Feb 13 15:34:00.719404 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:34:00.745204 containerd[1452]: time="2025-02-13T15:34:00.745133021Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\"" Feb 13 15:34:00.748496 containerd[1452]: time="2025-02-13T15:34:00.745788490Z" level=info msg="StartContainer for \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\"" Feb 13 15:34:00.797617 systemd[1]: Started cri-containerd-326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db.scope - libcontainer container 326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db. 
Feb 13 15:34:00.831125 containerd[1452]: time="2025-02-13T15:34:00.830767372Z" level=info msg="StartContainer for \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\" returns successfully" Feb 13 15:34:00.840840 systemd[1]: cri-containerd-326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db.scope: Deactivated successfully. Feb 13 15:34:00.933517 sshd[3181]: Connection closed by 10.0.0.1 port 43132 Feb 13 15:34:00.934129 sshd-session[3179]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:00.939200 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:43132.service: Deactivated successfully. Feb 13 15:34:00.943142 containerd[1452]: time="2025-02-13T15:34:00.943044025Z" level=info msg="shim disconnected" id=326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db namespace=k8s.io Feb 13 15:34:00.943142 containerd[1452]: time="2025-02-13T15:34:00.943112284Z" level=warning msg="cleaning up after shim disconnected" id=326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db namespace=k8s.io Feb 13 15:34:00.943142 containerd[1452]: time="2025-02-13T15:34:00.943121807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:00.944314 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:34:00.950198 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:34:00.954895 systemd-logind[1427]: Removed session 8. Feb 13 15:34:01.642749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db-rootfs.mount: Deactivated successfully. Feb 13 15:34:01.691939 kubelet[2595]: E0213 15:34:01.691900 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.692865 kubelet[2595]: E0213 15:34:01.692831 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.696118 containerd[1452]: time="2025-02-13T15:34:01.696032957Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:34:01.714129 containerd[1452]: time="2025-02-13T15:34:01.714063594Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\"" Feb 13 15:34:01.720282 containerd[1452]: time="2025-02-13T15:34:01.719313889Z" level=info msg="StartContainer for \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\"" Feb 13 15:34:01.762375 systemd[1]: Started cri-containerd-6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae.scope - libcontainer container 6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae. Feb 13 15:34:01.784098 systemd[1]: cri-containerd-6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae.scope: Deactivated successfully. 
Feb 13 15:34:01.786441 containerd[1452]: time="2025-02-13T15:34:01.786345826Z" level=info msg="StartContainer for \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\" returns successfully" Feb 13 15:34:01.809001 containerd[1452]: time="2025-02-13T15:34:01.808895675Z" level=info msg="shim disconnected" id=6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae namespace=k8s.io Feb 13 15:34:01.809001 containerd[1452]: time="2025-02-13T15:34:01.808959373Z" level=warning msg="cleaning up after shim disconnected" id=6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae namespace=k8s.io Feb 13 15:34:01.809001 containerd[1452]: time="2025-02-13T15:34:01.808967735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:02.642829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae-rootfs.mount: Deactivated successfully. Feb 13 15:34:02.701355 kubelet[2595]: E0213 15:34:02.701329 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:02.703992 containerd[1452]: time="2025-02-13T15:34:02.703719462Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:34:02.722830 containerd[1452]: time="2025-02-13T15:34:02.722771609Z" level=info msg="CreateContainer within sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\"" Feb 13 15:34:02.723417 containerd[1452]: time="2025-02-13T15:34:02.723371129Z" level=info msg="StartContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\"" Feb 13 15:34:02.753631 systemd[1]: Started cri-containerd-d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69.scope - libcontainer container d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69. Feb 13 15:34:02.788561 containerd[1452]: time="2025-02-13T15:34:02.788433394Z" level=info msg="StartContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" returns successfully" Feb 13 15:34:02.889626 kubelet[2595]: I0213 15:34:02.889586 2595 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:34:02.942642 kubelet[2595]: I0213 15:34:02.942122 2595 topology_manager.go:215] "Topology Admit Handler" podUID="fd772383-833b-4115-8f02-e2524c801722" podNamespace="kube-system" podName="coredns-76f75df574-pvpjh" Feb 13 15:34:02.942642 kubelet[2595]: I0213 15:34:02.942423 2595 topology_manager.go:215] "Topology Admit Handler" podUID="423472c7-a04e-4ff4-bb7e-cf8788b3be42" podNamespace="kube-system" podName="coredns-76f75df574-ggnzt" Feb 13 15:34:02.965145 systemd[1]: Created slice kubepods-burstable-podfd772383_833b_4115_8f02_e2524c801722.slice - libcontainer container kubepods-burstable-podfd772383_833b_4115_8f02_e2524c801722.slice. 
Feb 13 15:34:02.971771 kubelet[2595]: I0213 15:34:02.971621 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fv7\" (UniqueName: \"kubernetes.io/projected/423472c7-a04e-4ff4-bb7e-cf8788b3be42-kube-api-access-c5fv7\") pod \"coredns-76f75df574-ggnzt\" (UID: \"423472c7-a04e-4ff4-bb7e-cf8788b3be42\") " pod="kube-system/coredns-76f75df574-ggnzt" Feb 13 15:34:02.971771 kubelet[2595]: I0213 15:34:02.971726 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423472c7-a04e-4ff4-bb7e-cf8788b3be42-config-volume\") pod \"coredns-76f75df574-ggnzt\" (UID: \"423472c7-a04e-4ff4-bb7e-cf8788b3be42\") " pod="kube-system/coredns-76f75df574-ggnzt" Feb 13 15:34:02.971771 kubelet[2595]: I0213 15:34:02.971774 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd772383-833b-4115-8f02-e2524c801722-config-volume\") pod \"coredns-76f75df574-pvpjh\" (UID: \"fd772383-833b-4115-8f02-e2524c801722\") " pod="kube-system/coredns-76f75df574-pvpjh" Feb 13 15:34:02.971941 kubelet[2595]: I0213 15:34:02.971806 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlxfh\" (UniqueName: \"kubernetes.io/projected/fd772383-833b-4115-8f02-e2524c801722-kube-api-access-mlxfh\") pod \"coredns-76f75df574-pvpjh\" (UID: \"fd772383-833b-4115-8f02-e2524c801722\") " pod="kube-system/coredns-76f75df574-pvpjh" Feb 13 15:34:02.973335 systemd[1]: Created slice kubepods-burstable-pod423472c7_a04e_4ff4_bb7e_cf8788b3be42.slice - libcontainer container kubepods-burstable-pod423472c7_a04e_4ff4_bb7e_cf8788b3be42.slice. 
Feb 13 15:34:03.271383 kubelet[2595]: E0213 15:34:03.271194 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:03.272525 containerd[1452]: time="2025-02-13T15:34:03.272408002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pvpjh,Uid:fd772383-833b-4115-8f02-e2524c801722,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:03.277186 kubelet[2595]: E0213 15:34:03.277153 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:03.277651 containerd[1452]: time="2025-02-13T15:34:03.277616653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggnzt,Uid:423472c7-a04e-4ff4-bb7e-cf8788b3be42,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:03.706223 kubelet[2595]: E0213 15:34:03.706181 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:03.720415 kubelet[2595]: I0213 15:34:03.720362 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-crc8g" podStartSLOduration=6.29377196 podStartE2EDuration="15.720320527s" podCreationTimestamp="2025-02-13 15:33:48 +0000 UTC" firstStartedPulling="2025-02-13 15:33:49.182377706 +0000 UTC m=+14.681829720" lastFinishedPulling="2025-02-13 15:33:58.608926033 +0000 UTC m=+24.108378287" observedRunningTime="2025-02-13 15:34:03.720038815 +0000 UTC m=+29.219490829" watchObservedRunningTime="2025-02-13 15:34:03.720320527 +0000 UTC m=+29.219772541" Feb 13 15:34:04.707790 kubelet[2595]: E0213 15:34:04.707759 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:05.017787 systemd-networkd[1373]: cilium_host: Link UP Feb 13 15:34:05.018477 systemd-networkd[1373]: cilium_net: Link UP Feb 13 15:34:05.019630 systemd-networkd[1373]: cilium_net: Gained carrier Feb 13 15:34:05.019821 systemd-networkd[1373]: cilium_host: Gained carrier Feb 13 15:34:05.019931 systemd-networkd[1373]: cilium_net: Gained IPv6LL Feb 13 15:34:05.020060 systemd-networkd[1373]: cilium_host: Gained IPv6LL Feb 13 15:34:05.102836 systemd-networkd[1373]: cilium_vxlan: Link UP Feb 13 15:34:05.102900 systemd-networkd[1373]: cilium_vxlan: Gained carrier Feb 13 15:34:05.410519 kernel: NET: Registered PF_ALG protocol family Feb 13 15:34:05.709865 kubelet[2595]: E0213 15:34:05.709726 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:05.953735 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:37590.service - OpenSSH per-connection server daemon (10.0.0.1:37590). Feb 13 15:34:06.009917 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 37590 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:06.011943 systemd-networkd[1373]: lxc_health: Link UP Feb 13 15:34:06.012164 systemd-networkd[1373]: lxc_health: Gained carrier Feb 13 15:34:06.015817 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:06.027703 systemd-logind[1427]: New session 9 of user core. 
Feb 13 15:34:06.036623 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:34:06.175455 sshd[3811]: Connection closed by 10.0.0.1 port 37590 Feb 13 15:34:06.176253 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:06.179151 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:37590.service: Deactivated successfully. Feb 13 15:34:06.183229 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:34:06.185030 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:34:06.186014 systemd-logind[1427]: Removed session 9. Feb 13 15:34:06.422545 systemd-networkd[1373]: lxc9ca2caacfce5: Link UP Feb 13 15:34:06.422703 systemd-networkd[1373]: lxcfc59f292fa7a: Link UP Feb 13 15:34:06.445410 kernel: eth0: renamed from tmp0a6c2 Feb 13 15:34:06.453633 kernel: eth0: renamed from tmp3bb02 Feb 13 15:34:06.473516 systemd-networkd[1373]: lxc9ca2caacfce5: Gained carrier Feb 13 15:34:06.477019 systemd-networkd[1373]: lxcfc59f292fa7a: Gained carrier Feb 13 15:34:06.936592 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Feb 13 15:34:07.108898 kubelet[2595]: E0213 15:34:07.108852 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:07.575515 systemd-networkd[1373]: lxc_health: Gained IPv6LL Feb 13 15:34:07.712541 kubelet[2595]: E0213 15:34:07.712498 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:08.151629 systemd-networkd[1373]: lxc9ca2caacfce5: Gained IPv6LL Feb 13 15:34:08.152461 systemd-networkd[1373]: lxcfc59f292fa7a: Gained IPv6LL Feb 13 15:34:08.714789 kubelet[2595]: E0213 15:34:08.714736 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:10.080414 containerd[1452]: time="2025-02-13T15:34:10.078997173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:10.080414 containerd[1452]: time="2025-02-13T15:34:10.079070628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:10.080414 containerd[1452]: time="2025-02-13T15:34:10.079088671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.080414 containerd[1452]: time="2025-02-13T15:34:10.079171408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.096020 containerd[1452]: time="2025-02-13T15:34:10.095937495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:10.096198 containerd[1452]: time="2025-02-13T15:34:10.096030434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:10.096198 containerd[1452]: time="2025-02-13T15:34:10.096057919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.096198 containerd[1452]: time="2025-02-13T15:34:10.096153618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.100585 systemd[1]: Started cri-containerd-3bb025036ec2207516871aef282bacaa826b8fb5c180f611a734840bc2ca98a1.scope - libcontainer container 3bb025036ec2207516871aef282bacaa826b8fb5c180f611a734840bc2ca98a1. Feb 13 15:34:10.112071 systemd[1]: Started cri-containerd-0a6c2dcceede06b9a43b28539ad62d0d521290ef8d00f16144ac20ee26089117.scope - libcontainer container 0a6c2dcceede06b9a43b28539ad62d0d521290ef8d00f16144ac20ee26089117. Feb 13 15:34:10.117560 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:34:10.122854 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:34:10.141886 containerd[1452]: time="2025-02-13T15:34:10.141834684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pvpjh,Uid:fd772383-833b-4115-8f02-e2524c801722,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb025036ec2207516871aef282bacaa826b8fb5c180f611a734840bc2ca98a1\"" Feb 13 15:34:10.142879 kubelet[2595]: E0213 15:34:10.142708 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:10.143890 containerd[1452]: time="2025-02-13T15:34:10.143859205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ggnzt,Uid:423472c7-a04e-4ff4-bb7e-cf8788b3be42,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6c2dcceede06b9a43b28539ad62d0d521290ef8d00f16144ac20ee26089117\"" Feb 13 15:34:10.145121 kubelet[2595]: E0213 15:34:10.145104 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:10.147821 containerd[1452]: time="2025-02-13T15:34:10.147793466Z" level=info msg="CreateContainer within sandbox \"0a6c2dcceede06b9a43b28539ad62d0d521290ef8d00f16144ac20ee26089117\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:10.148323 containerd[1452]: time="2025-02-13T15:34:10.148295726Z" level=info msg="CreateContainer within sandbox \"3bb025036ec2207516871aef282bacaa826b8fb5c180f611a734840bc2ca98a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:10.165931 containerd[1452]: time="2025-02-13T15:34:10.165884937Z" level=info msg="CreateContainer within sandbox \"0a6c2dcceede06b9a43b28539ad62d0d521290ef8d00f16144ac20ee26089117\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c0d1a590f05f11024b93469091d509b71c914045023929fc169615a187882e3\"" Feb 13 15:34:10.166560 containerd[1452]: time="2025-02-13T15:34:10.166518542Z" level=info msg="CreateContainer within sandbox \"3bb025036ec2207516871aef282bacaa826b8fb5c180f611a734840bc2ca98a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"461371f09d82d929a50020259f4d4743b3431e8a65d734ae004d88fc7a03f637\"" Feb 13 15:34:10.167953 containerd[1452]: time="2025-02-13T15:34:10.167902577Z" level=info msg="StartContainer for \"461371f09d82d929a50020259f4d4743b3431e8a65d734ae004d88fc7a03f637\"" Feb 13 15:34:10.168526 containerd[1452]: 
time="2025-02-13T15:34:10.168491414Z" level=info msg="StartContainer for \"9c0d1a590f05f11024b93469091d509b71c914045023929fc169615a187882e3\"" Feb 13 15:34:10.195579 systemd[1]: Started cri-containerd-461371f09d82d929a50020259f4d4743b3431e8a65d734ae004d88fc7a03f637.scope - libcontainer container 461371f09d82d929a50020259f4d4743b3431e8a65d734ae004d88fc7a03f637. Feb 13 15:34:10.198618 systemd[1]: Started cri-containerd-9c0d1a590f05f11024b93469091d509b71c914045023929fc169615a187882e3.scope - libcontainer container 9c0d1a590f05f11024b93469091d509b71c914045023929fc169615a187882e3. Feb 13 15:34:10.229191 containerd[1452]: time="2025-02-13T15:34:10.229141490Z" level=info msg="StartContainer for \"9c0d1a590f05f11024b93469091d509b71c914045023929fc169615a187882e3\" returns successfully" Feb 13 15:34:10.234693 containerd[1452]: time="2025-02-13T15:34:10.234645902Z" level=info msg="StartContainer for \"461371f09d82d929a50020259f4d4743b3431e8a65d734ae004d88fc7a03f637\" returns successfully" Feb 13 15:34:10.719949 kubelet[2595]: E0213 15:34:10.719675 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:10.722744 kubelet[2595]: E0213 15:34:10.722652 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:10.731790 kubelet[2595]: I0213 15:34:10.731745 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ggnzt" podStartSLOduration=22.731706986 podStartE2EDuration="22.731706986s" podCreationTimestamp="2025-02-13 15:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:10.730917829 +0000 UTC m=+36.230369843" watchObservedRunningTime="2025-02-13 15:34:10.731706986 +0000 UTC m=+36.231159000" Feb 13 15:34:10.742158 kubelet[2595]: I0213 15:34:10.742115 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pvpjh" podStartSLOduration=22.742077284 podStartE2EDuration="22.742077284s" podCreationTimestamp="2025-02-13 15:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:10.741953659 +0000 UTC m=+36.241405633" watchObservedRunningTime="2025-02-13 15:34:10.742077284 +0000 UTC m=+36.241529258" Feb 13 15:34:11.085884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502812094.mount: Deactivated successfully. Feb 13 15:34:11.191343 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:37602.service - OpenSSH per-connection server daemon (10.0.0.1:37602). Feb 13 15:34:11.237999 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 37602 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:11.239559 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:11.243188 systemd-logind[1427]: New session 10 of user core. Feb 13 15:34:11.249562 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 15:34:11.367560 sshd[4033]: Connection closed by 10.0.0.1 port 37602 Feb 13 15:34:11.367946 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:11.371381 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:37602.service: Deactivated successfully. Feb 13 15:34:11.373039 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:34:11.373733 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:34:11.374638 systemd-logind[1427]: Removed session 10. Feb 13 15:34:11.724028 kubelet[2595]: E0213 15:34:11.723915 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:11.724929 kubelet[2595]: E0213 15:34:11.724899 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:12.726119 kubelet[2595]: E0213 15:34:12.725557 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:16.394692 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988). Feb 13 15:34:16.437845 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:16.438282 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:16.442397 systemd-logind[1427]: New session 11 of user core. Feb 13 15:34:16.452657 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:34:16.581604 sshd[4051]: Connection closed by 10.0.0.1 port 52988 Feb 13 15:34:16.583236 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:16.590254 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:52988.service: Deactivated successfully. Feb 13 15:34:16.592311 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:34:16.594429 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:34:16.607189 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:52996.service - OpenSSH per-connection server daemon (10.0.0.1:52996). Feb 13 15:34:16.609008 systemd-logind[1427]: Removed session 11. Feb 13 15:34:16.656196 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 52996 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:16.657553 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:16.662997 systemd-logind[1427]: New session 12 of user core. Feb 13 15:34:16.670579 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:34:16.835759 sshd[4067]: Connection closed by 10.0.0.1 port 52996 Feb 13 15:34:16.836124 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:16.849457 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:52996.service: Deactivated successfully. Feb 13 15:34:16.851067 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:34:16.856190 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:34:16.863742 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:53006.service - OpenSSH per-connection server daemon (10.0.0.1:53006). 
Feb 13 15:34:16.866476 systemd-logind[1427]: Removed session 12. Feb 13 15:34:16.903195 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 53006 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:16.904605 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:16.908481 systemd-logind[1427]: New session 13 of user core. Feb 13 15:34:16.917599 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:34:17.040878 sshd[4079]: Connection closed by 10.0.0.1 port 53006 Feb 13 15:34:17.041218 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:17.044052 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:53006.service: Deactivated successfully. Feb 13 15:34:17.045719 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:34:17.047099 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:34:17.047815 systemd-logind[1427]: Removed session 13. Feb 13 15:34:22.057249 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:53008.service - OpenSSH per-connection server daemon (10.0.0.1:53008). Feb 13 15:34:22.100451 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 53008 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:22.101137 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:22.105088 systemd-logind[1427]: New session 14 of user core. Feb 13 15:34:22.116567 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:34:22.227791 sshd[4098]: Connection closed by 10.0.0.1 port 53008 Feb 13 15:34:22.228267 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:22.232018 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:53008.service: Deactivated successfully. Feb 13 15:34:22.233625 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:34:22.236143 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:34:22.237305 systemd-logind[1427]: Removed session 14. Feb 13 15:34:27.239993 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:34744.service - OpenSSH per-connection server daemon (10.0.0.1:34744). Feb 13 15:34:27.287338 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 34744 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:27.287831 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:27.292193 systemd-logind[1427]: New session 15 of user core. Feb 13 15:34:27.304531 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:34:27.427924 sshd[4113]: Connection closed by 10.0.0.1 port 34744 Feb 13 15:34:27.428289 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:27.437861 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:34744.service: Deactivated successfully. Feb 13 15:34:27.439353 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:34:27.440653 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:34:27.446632 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:34752.service - OpenSSH per-connection server daemon (10.0.0.1:34752). Feb 13 15:34:27.447512 systemd-logind[1427]: Removed session 15. 
Feb 13 15:34:27.486901 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 34752 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:27.488285 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:27.494540 systemd-logind[1427]: New session 16 of user core. Feb 13 15:34:27.506537 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:34:27.718009 sshd[4127]: Connection closed by 10.0.0.1 port 34752 Feb 13 15:34:27.718830 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:27.727827 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:34752.service: Deactivated successfully. Feb 13 15:34:27.729459 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:34:27.731153 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:34:27.741905 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:34764.service - OpenSSH per-connection server daemon (10.0.0.1:34764). Feb 13 15:34:27.742797 systemd-logind[1427]: Removed session 16. Feb 13 15:34:27.784181 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 34764 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:27.785728 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:27.790591 systemd-logind[1427]: New session 17 of user core. Feb 13 15:34:27.799544 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:34:29.013496 sshd[4140]: Connection closed by 10.0.0.1 port 34764 Feb 13 15:34:29.014579 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:29.023957 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:34764.service: Deactivated successfully. Feb 13 15:34:29.025558 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:34:29.026856 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:34:29.032699 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:34778.service - OpenSSH per-connection server daemon (10.0.0.1:34778). Feb 13 15:34:29.036293 systemd-logind[1427]: Removed session 17. Feb 13 15:34:29.073244 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 34778 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:29.074876 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:29.078539 systemd-logind[1427]: New session 18 of user core. Feb 13 15:34:29.090557 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:34:29.308930 sshd[4160]: Connection closed by 10.0.0.1 port 34778 Feb 13 15:34:29.307703 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:29.319977 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:34778.service: Deactivated successfully. Feb 13 15:34:29.321621 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:34:29.324649 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:34:29.338842 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780). Feb 13 15:34:29.340091 systemd-logind[1427]: Removed session 18. 
Feb 13 15:34:29.377923 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:29.379181 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:29.389228 systemd-logind[1427]: New session 19 of user core. Feb 13 15:34:29.395550 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:34:29.510829 sshd[4172]: Connection closed by 10.0.0.1 port 34780 Feb 13 15:34:29.511187 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:29.514767 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:34780.service: Deactivated successfully. Feb 13 15:34:29.516967 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:34:29.518470 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:34:29.519284 systemd-logind[1427]: Removed session 19. Feb 13 15:34:34.527976 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:60902.service - OpenSSH per-connection server daemon (10.0.0.1:60902). Feb 13 15:34:34.572218 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 60902 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:34.572202 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:34.578739 systemd-logind[1427]: New session 20 of user core. Feb 13 15:34:34.586586 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:34:34.697960 sshd[4189]: Connection closed by 10.0.0.1 port 60902 Feb 13 15:34:34.697533 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:34.701126 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:60902.service: Deactivated successfully. Feb 13 15:34:34.703866 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:34:34.704479 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:34:34.705332 systemd-logind[1427]: Removed session 20. Feb 13 15:34:39.711093 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:60904.service - OpenSSH per-connection server daemon (10.0.0.1:60904). Feb 13 15:34:39.761037 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 60904 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:39.762375 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:39.766477 systemd-logind[1427]: New session 21 of user core. Feb 13 15:34:39.774586 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:34:39.901447 sshd[4205]: Connection closed by 10.0.0.1 port 60904 Feb 13 15:34:39.900373 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:39.903232 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:60904.service: Deactivated successfully. Feb 13 15:34:39.906249 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:34:39.908044 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:34:39.909191 systemd-logind[1427]: Removed session 21. Feb 13 15:34:44.922948 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:40186.service - OpenSSH per-connection server daemon (10.0.0.1:40186). 
Feb 13 15:34:44.971721 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 40186 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:44.973554 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:44.979947 systemd-logind[1427]: New session 22 of user core. Feb 13 15:34:44.992558 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:34:45.124417 sshd[4220]: Connection closed by 10.0.0.1 port 40186 Feb 13 15:34:45.124929 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:45.136134 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:40186.service: Deactivated successfully. Feb 13 15:34:45.138015 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:34:45.140445 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:34:45.141773 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). Feb 13 15:34:45.143769 systemd-logind[1427]: Removed session 22. Feb 13 15:34:45.185820 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:34:45.187114 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:45.191106 systemd-logind[1427]: New session 23 of user core. Feb 13 15:34:45.200550 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:34:46.607445 kubelet[2595]: E0213 15:34:46.607337 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:47.606728 kubelet[2595]: E0213 15:34:47.606682 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:47.727612 containerd[1452]: time="2025-02-13T15:34:47.727571553Z" level=info msg="StopContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" with timeout 30 (s)" Feb 13 15:34:47.728279 containerd[1452]: time="2025-02-13T15:34:47.728249091Z" level=info msg="Stop container \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" with signal terminated" Feb 13 15:34:47.738939 containerd[1452]: time="2025-02-13T15:34:47.737378659Z" level=info msg="StopContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" with timeout 2 (s)" Feb 13 15:34:47.738939 containerd[1452]: time="2025-02-13T15:34:47.737712829Z" level=info msg="Stop container \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" with signal terminated" Feb 13 15:34:47.738939 containerd[1452]: time="2025-02-13T15:34:47.738427764Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:34:47.741556 systemd[1]: cri-containerd-e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936.scope: Deactivated successfully. 
Feb 13 15:34:47.748504 systemd-networkd[1373]: lxc_health: Link DOWN Feb 13 15:34:47.748511 systemd-networkd[1373]: lxc_health: Lost carrier Feb 13 15:34:47.784585 systemd[1]: cri-containerd-d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69.scope: Deactivated successfully. Feb 13 15:34:47.784843 systemd[1]: cri-containerd-d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69.scope: Consumed 6.615s CPU time. Feb 13 15:34:47.792312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936-rootfs.mount: Deactivated successfully. Feb 13 15:34:47.804439 containerd[1452]: time="2025-02-13T15:34:47.804339759Z" level=info msg="shim disconnected" id=e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936 namespace=k8s.io Feb 13 15:34:47.804439 containerd[1452]: time="2025-02-13T15:34:47.804436950Z" level=warning msg="cleaning up after shim disconnected" id=e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936 namespace=k8s.io Feb 13 15:34:47.804632 containerd[1452]: time="2025-02-13T15:34:47.804448269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:47.806889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69-rootfs.mount: Deactivated successfully. Feb 13 15:34:47.813058 containerd[1452]: time="2025-02-13T15:34:47.812989091Z" level=info msg="shim disconnected" id=d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69 namespace=k8s.io Feb 13 15:34:47.813058 containerd[1452]: time="2025-02-13T15:34:47.813049086Z" level=warning msg="cleaning up after shim disconnected" id=d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69 namespace=k8s.io Feb 13 15:34:47.813058 containerd[1452]: time="2025-02-13T15:34:47.813060165Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:47.846544 containerd[1452]: time="2025-02-13T15:34:47.846485760Z" level=info msg="StopContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" returns successfully" Feb 13 15:34:47.849979 containerd[1452]: time="2025-02-13T15:34:47.849923766Z" level=info msg="StopContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" returns successfully" Feb 13 15:34:47.851432 containerd[1452]: time="2025-02-13T15:34:47.850531911Z" level=info msg="StopPodSandbox for \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\"" Feb 13 15:34:47.851432 containerd[1452]: time="2025-02-13T15:34:47.850661619Z" level=info msg="StopPodSandbox for \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\"" Feb 13 15:34:47.853945 containerd[1452]: time="2025-02-13T15:34:47.853421608Z" level=info msg="Container to stop \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:47.853945 containerd[1452]: time="2025-02-13T15:34:47.853479523Z" level=info msg="Container to stop \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:47.853945 containerd[1452]: time="2025-02-13T15:34:47.853492401Z" level=info msg="Container to stop \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:47.853945 containerd[1452]: time="2025-02-13T15:34:47.853502320Z" 
level=info msg="Container to stop \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:47.853945 containerd[1452]: time="2025-02-13T15:34:47.853511000Z" level=info msg="Container to stop \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:47.856236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8-shm.mount: Deactivated successfully.
Feb 13 15:34:47.860010 containerd[1452]: time="2025-02-13T15:34:47.859968851Z" level=info msg="Container to stop \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:47.863138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c-shm.mount: Deactivated successfully.
Feb 13 15:34:47.863943 systemd[1]: cri-containerd-c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8.scope: Deactivated successfully.
Feb 13 15:34:47.868807 systemd[1]: cri-containerd-241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c.scope: Deactivated successfully.
Feb 13 15:34:47.899281 containerd[1452]: time="2025-02-13T15:34:47.899126004Z" level=info msg="shim disconnected" id=241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c namespace=k8s.io
Feb 13 15:34:47.899281 containerd[1452]: time="2025-02-13T15:34:47.899275550Z" level=warning msg="cleaning up after shim disconnected" id=241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c namespace=k8s.io
Feb 13 15:34:47.899281 containerd[1452]: time="2025-02-13T15:34:47.899285510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:47.899644 containerd[1452]: time="2025-02-13T15:34:47.899193598Z" level=info msg="shim disconnected" id=c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8 namespace=k8s.io
Feb 13 15:34:47.899644 containerd[1452]: time="2025-02-13T15:34:47.899624119Z" level=warning msg="cleaning up after shim disconnected" id=c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8 namespace=k8s.io
Feb 13 15:34:47.899644 containerd[1452]: time="2025-02-13T15:34:47.899632078Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:47.912549 containerd[1452]: time="2025-02-13T15:34:47.912504825Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:34:47.913775 containerd[1452]: time="2025-02-13T15:34:47.913646921Z" level=info msg="TearDown network for sandbox \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" successfully"
Feb 13 15:34:47.913775 containerd[1452]: time="2025-02-13T15:34:47.913671639Z" level=info msg="StopPodSandbox for \"c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8\" returns successfully"
Feb 13 15:34:47.915040 containerd[1452]: time="2025-02-13T15:34:47.914998518Z" level=info msg="TearDown network for sandbox \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\" successfully"
Feb 13 15:34:47.915040 containerd[1452]: time="2025-02-13T15:34:47.915028275Z" level=info msg="StopPodSandbox for \"241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c\" returns successfully"
Feb 13 15:34:48.042279 kubelet[2595]: I0213 15:34:48.042235 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-cilium-config-path\") pod \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\" (UID: \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\") "
Feb 13 15:34:48.042279 kubelet[2595]: I0213 15:34:48.042283 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-net\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042307 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-hubble-tls\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042328 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-etc-cni-netd\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042349 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-config-path\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042368 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-cgroup\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042384 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-lib-modules\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.042989 kubelet[2595]: I0213 15:34:48.042428 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-kernel\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042451 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psk45\" (UniqueName: \"kubernetes.io/projected/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-kube-api-access-psk45\") pod \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\" (UID: \"ce7691a5-4d42-47cf-b12b-e4016c5ee3a7\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042472 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24af0153-5005-4f62-b880-72fc5025b2c2-clustermesh-secrets\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042490 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-run\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042509 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkrw\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-kube-api-access-gzkrw\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042528 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-xtables-lock\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044495 kubelet[2595]: I0213 15:34:48.042569 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cni-path\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044646 kubelet[2595]: I0213 15:34:48.042585 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-bpf-maps\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.044646 kubelet[2595]: I0213 15:34:48.042605 2595 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-hostproc\") pod \"24af0153-5005-4f62-b880-72fc5025b2c2\" (UID: \"24af0153-5005-4f62-b880-72fc5025b2c2\") "
Feb 13 15:34:48.046297 kubelet[2595]: I0213 15:34:48.045464 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.046297 kubelet[2595]: I0213 15:34:48.045736 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.046297 kubelet[2595]: I0213 15:34:48.046008 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.046297 kubelet[2595]: I0213 15:34:48.046058 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048590 kubelet[2595]: I0213 15:34:48.048541 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" (UID: "ce7691a5-4d42-47cf-b12b-e4016c5ee3a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:34:48.048659 kubelet[2595]: I0213 15:34:48.048604 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048659 kubelet[2595]: I0213 15:34:48.048630 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048659 kubelet[2595]: I0213 15:34:48.048648 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048737 kubelet[2595]: I0213 15:34:48.048677 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048737 kubelet[2595]: I0213 15:34:48.048693 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.048737 kubelet[2595]: I0213 15:34:48.048710 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:48.049747 kubelet[2595]: I0213 15:34:48.049499 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af0153-5005-4f62-b880-72fc5025b2c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:34:48.050409 kubelet[2595]: I0213 15:34:48.050343 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:34:48.051743 kubelet[2595]: I0213 15:34:48.051678 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-kube-api-access-psk45" (OuterVolumeSpecName: "kube-api-access-psk45") pod "ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" (UID: "ce7691a5-4d42-47cf-b12b-e4016c5ee3a7"). InnerVolumeSpecName "kube-api-access-psk45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:48.052404 kubelet[2595]: I0213 15:34:48.052219 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-kube-api-access-gzkrw" (OuterVolumeSpecName: "kube-api-access-gzkrw") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "kube-api-access-gzkrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:48.052596 kubelet[2595]: I0213 15:34:48.052558 2595 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "24af0153-5005-4f62-b880-72fc5025b2c2" (UID: "24af0153-5005-4f62-b880-72fc5025b2c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:48.143589 kubelet[2595]: I0213 15:34:48.143377 2595 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143589 kubelet[2595]: I0213 15:34:48.143508 2595 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143589 kubelet[2595]: I0213 15:34:48.143523 2595 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143589 kubelet[2595]: I0213 15:34:48.143539 2595 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-psk45\" (UniqueName: \"kubernetes.io/projected/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-kube-api-access-psk45\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143589 kubelet[2595]: I0213 15:34:48.143551 2595 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24af0153-5005-4f62-b880-72fc5025b2c2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143560 2595 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143810 2595 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gzkrw\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-kube-api-access-gzkrw\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143821 2595 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143830 2595 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143839 2595 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143847 2595 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143856 2595 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.143924 kubelet[2595]: I0213 15:34:48.143876 2595 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24af0153-5005-4f62-b880-72fc5025b2c2-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.144108 kubelet[2595]: I0213 15:34:48.143885 2595 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24af0153-5005-4f62-b880-72fc5025b2c2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.144108 kubelet[2595]: I0213 15:34:48.143897 2595 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24af0153-5005-4f62-b880-72fc5025b2c2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.144108 kubelet[2595]: I0213 15:34:48.143908 2595 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:48.612554 systemd[1]: Removed slice kubepods-burstable-pod24af0153_5005_4f62_b880_72fc5025b2c2.slice - libcontainer container kubepods-burstable-pod24af0153_5005_4f62_b880_72fc5025b2c2.slice.
Feb 13 15:34:48.612636 systemd[1]: kubepods-burstable-pod24af0153_5005_4f62_b880_72fc5025b2c2.slice: Consumed 6.785s CPU time.
Feb 13 15:34:48.615051 systemd[1]: Removed slice kubepods-besteffort-podce7691a5_4d42_47cf_b12b_e4016c5ee3a7.slice - libcontainer container kubepods-besteffort-podce7691a5_4d42_47cf_b12b_e4016c5ee3a7.slice.
Feb 13 15:34:48.713806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-241c338822b5ee00edbb32dfe81368fc3699b488589773ff8f9ae1a03d25034c-rootfs.mount: Deactivated successfully.
Feb 13 15:34:48.713895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c105f9a3a189cb2b087af48e50e6676d87a04ff1547c2e1b03f9db5b046678b8-rootfs.mount: Deactivated successfully.
Feb 13 15:34:48.713949 systemd[1]: var-lib-kubelet-pods-ce7691a5\x2d4d42\x2d47cf\x2db12b\x2de4016c5ee3a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsk45.mount: Deactivated successfully.
Feb 13 15:34:48.714015 systemd[1]: var-lib-kubelet-pods-24af0153\x2d5005\x2d4f62\x2db880\x2d72fc5025b2c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgzkrw.mount: Deactivated successfully.
Feb 13 15:34:48.714065 systemd[1]: var-lib-kubelet-pods-24af0153\x2d5005\x2d4f62\x2db880\x2d72fc5025b2c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:34:48.714112 systemd[1]: var-lib-kubelet-pods-24af0153\x2d5005\x2d4f62\x2db880\x2d72fc5025b2c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:34:48.834128 kubelet[2595]: I0213 15:34:48.833776 2595 scope.go:117] "RemoveContainer" containerID="e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936"
Feb 13 15:34:48.835135 containerd[1452]: time="2025-02-13T15:34:48.835081728Z" level=info msg="RemoveContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\""
Feb 13 15:34:48.838346 containerd[1452]: time="2025-02-13T15:34:48.838296452Z" level=info msg="RemoveContainer for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" returns successfully"
Feb 13 15:34:48.838603 kubelet[2595]: I0213 15:34:48.838570 2595 scope.go:117] "RemoveContainer" containerID="e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936"
Feb 13 15:34:48.838812 containerd[1452]: time="2025-02-13T15:34:48.838771811Z" level=error msg="ContainerStatus for \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\": not found"
Feb 13 15:34:48.838939 kubelet[2595]: E0213 15:34:48.838922 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\": not found" containerID="e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936"
Feb 13 15:34:48.846424 kubelet[2595]: I0213 15:34:48.844974 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936"} err="failed to get container status \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9c0959a2451bfc88afbcb9fc662c4fd5db7b10a6cec1f51716ccb7e88a43936\": not found"
Feb 13 15:34:48.846424 kubelet[2595]: I0213 15:34:48.845344 2595 scope.go:117] "RemoveContainer" containerID="d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69"
Feb 13 15:34:48.847298 containerd[1452]: time="2025-02-13T15:34:48.847036901Z" level=info msg="RemoveContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\""
Feb 13 15:34:48.856283 containerd[1452]: time="2025-02-13T15:34:48.856216512Z" level=info msg="RemoveContainer for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" returns successfully"
Feb 13 15:34:48.856636 kubelet[2595]: I0213 15:34:48.856599 2595 scope.go:117] "RemoveContainer" containerID="6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae"
Feb 13 15:34:48.858567 containerd[1452]: time="2025-02-13T15:34:48.858530233Z" level=info msg="RemoveContainer for \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\""
Feb 13 15:34:48.865402 containerd[1452]: time="2025-02-13T15:34:48.865276893Z" level=info msg="RemoveContainer for \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\" returns successfully"
Feb 13 15:34:48.865568 kubelet[2595]: I0213 15:34:48.865478 2595 scope.go:117] "RemoveContainer" containerID="326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db"
Feb 13 15:34:48.866956 containerd[1452]: time="2025-02-13T15:34:48.866921032Z" level=info msg="RemoveContainer for \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\""
Feb 13 15:34:48.870651 containerd[1452]: time="2025-02-13T15:34:48.870612115Z" level=info msg="RemoveContainer for \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\" returns successfully"
Feb 13 15:34:48.870835 kubelet[2595]: I0213 15:34:48.870807 2595 scope.go:117] "RemoveContainer" containerID="f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e"
Feb 13 15:34:48.871910 containerd[1452]: time="2025-02-13T15:34:48.871886845Z" level=info msg="RemoveContainer for \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\""
Feb 13 15:34:48.874360 containerd[1452]: time="2025-02-13T15:34:48.874325316Z" level=info msg="RemoveContainer for \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\" returns successfully"
Feb 13 15:34:48.874521 kubelet[2595]: I0213 15:34:48.874501 2595 scope.go:117] "RemoveContainer" containerID="aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150"
Feb 13 15:34:48.875715 containerd[1452]: time="2025-02-13T15:34:48.875689439Z" level=info msg="RemoveContainer for \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\""
Feb 13 15:34:48.878898 containerd[1452]: time="2025-02-13T15:34:48.878805891Z" level=info msg="RemoveContainer for \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\" returns successfully"
Feb 13 15:34:48.879016 kubelet[2595]: I0213 15:34:48.878987 2595 scope.go:117] "RemoveContainer" containerID="d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69"
Feb 13 15:34:48.879242 containerd[1452]: time="2025-02-13T15:34:48.879173379Z" level=error msg="ContainerStatus for \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\": not found"
Feb 13 15:34:48.880098 kubelet[2595]: E0213 15:34:48.880071 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\": not found" containerID="d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69"
Feb 13 15:34:48.880143 kubelet[2595]: I0213 15:34:48.880114 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69"} err="failed to get container status \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4ef1dc131d587b42d25bfcde0ed03088ab236a3aa739b72ca1c6caf107bac69\": not found"
Feb 13 15:34:48.880143 kubelet[2595]: I0213 15:34:48.880125 2595 scope.go:117] "RemoveContainer" containerID="6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae"
Feb 13 15:34:48.880458 containerd[1452]: time="2025-02-13T15:34:48.880338479Z" level=error msg="ContainerStatus for \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\": not found"
Feb 13 15:34:48.880525 kubelet[2595]: E0213 15:34:48.880516 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\": not found" containerID="6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae"
Feb 13 15:34:48.880576 kubelet[2595]: I0213 15:34:48.880561 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae"} err="failed to get container status \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f8b1d50326fa949d12ee231722bae4e6d3f6cb1000975884dfe1de358d36bae\": not found"
Feb 13 15:34:48.880604 kubelet[2595]: I0213 15:34:48.880577 2595 scope.go:117] "RemoveContainer" containerID="326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db"
Feb 13 15:34:48.880751 containerd[1452]: time="2025-02-13T15:34:48.880723366Z" level=error msg="ContainerStatus for \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\": not found"
Feb 13 15:34:48.881100 kubelet[2595]: E0213 15:34:48.881072 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\": not found" containerID="326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db"
Feb 13 15:34:48.881100 kubelet[2595]: I0213 15:34:48.881105 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db"} err="failed to get container status \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\": rpc error: code = NotFound desc = an error occurred when try to find container \"326748351b994c4ca9bfe67ff3f6a1a08212cc69d78ab164c8a9de94bd5045db\": not found"
Feb 13 15:34:48.881176 kubelet[2595]: I0213 15:34:48.881115 2595 scope.go:117] "RemoveContainer" containerID="f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e"
Feb 13 15:34:48.881350 containerd[1452]: time="2025-02-13T15:34:48.881302356Z" level=error msg="ContainerStatus for \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\": not found"
Feb 13 15:34:48.881714 kubelet[2595]: E0213 15:34:48.881700 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\": not found" containerID="f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e"
Feb 13 15:34:48.881803 kubelet[2595]: I0213 15:34:48.881732 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e"} err="failed to get container status \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0846489f19afc79110b87ee6313dc95a8980127eb9b3643a1c7b55520f4a74e\": not found"
Feb 13 15:34:48.881803 kubelet[2595]: I0213 15:34:48.881742 2595 scope.go:117] "RemoveContainer" containerID="aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150"
Feb 13 15:34:48.882053 containerd[1452]: time="2025-02-13T15:34:48.881967499Z" level=error msg="ContainerStatus for \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\": not found"
Feb 13 15:34:48.882134 kubelet[2595]: E0213 15:34:48.882110 2595 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\": not found" containerID="aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150"
Feb 13 15:34:48.882193 kubelet[2595]: I0213 15:34:48.882142 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150"} err="failed to get container status \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\": rpc error: code = NotFound desc = an error occurred when try to find container \"aefff1fbded76538a9758fc7821eae4b5b44daa9a58f7b64950ee4e9fd6a2150\": not found"
Feb 13 15:34:49.654800 sshd[4234]: Connection closed by 10.0.0.1 port 40190
Feb 13 15:34:49.656119 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:49.664680 kubelet[2595]: E0213 15:34:49.664650 2595 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:34:49.666107 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:40190.service: Deactivated successfully.
Feb 13 15:34:49.668853 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:34:49.669074 systemd[1]: session-23.scope: Consumed 1.823s CPU time.
Feb 13 15:34:49.670356 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:34:49.682674 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:40206.service - OpenSSH per-connection server daemon (10.0.0.1:40206).
Feb 13 15:34:49.683844 systemd-logind[1427]: Removed session 23.
Feb 13 15:34:49.722131 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 40206 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:34:49.723237 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:49.728213 systemd-logind[1427]: New session 24 of user core.
Feb 13 15:34:49.736526 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:34:50.359426 sshd[4397]: Connection closed by 10.0.0.1 port 40206
Feb 13 15:34:50.360003 sshd-session[4395]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:50.371087 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:40206.service: Deactivated successfully.
Feb 13 15:34:50.377051 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:34:50.380924 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:34:50.395922 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:40210.service - OpenSSH per-connection server daemon (10.0.0.1:40210).
Feb 13 15:34:50.399734 kubelet[2595]: I0213 15:34:50.399693 2595 topology_manager.go:215] "Topology Admit Handler" podUID="f24a8807-fba7-4164-99b2-3e71a912af66" podNamespace="kube-system" podName="cilium-cz474"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399750 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="mount-bpf-fs"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399761 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="mount-cgroup"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399768 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="apply-sysctl-overwrites"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399775 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" containerName="cilium-operator"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399782 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="clean-cilium-state"
Feb 13 15:34:50.399864 kubelet[2595]: E0213 15:34:50.399789 2595 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="cilium-agent"
Feb 13 15:34:50.399864 kubelet[2595]: I0213 15:34:50.399813 2595 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" containerName="cilium-operator"
Feb 13 15:34:50.399864 kubelet[2595]: I0213 15:34:50.399820 2595 memory_manager.go:354] "RemoveStaleState removing state" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" containerName="cilium-agent"
Feb 13 15:34:50.402547 systemd-logind[1427]: Removed session 24.
Feb 13 15:34:50.412994 systemd[1]: Created slice kubepods-burstable-podf24a8807_fba7_4164_99b2_3e71a912af66.slice - libcontainer container kubepods-burstable-podf24a8807_fba7_4164_99b2_3e71a912af66.slice.
Feb 13 15:34:50.457663 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 40210 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458135 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-cilium-run\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458185 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-cilium-cgroup\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458205 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-lib-modules\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458225 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-hostproc\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458243 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-cni-path\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458494 kubelet[2595]: I0213 15:34:50.458262 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f24a8807-fba7-4164-99b2-3e71a912af66-clustermesh-secrets\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458281 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f24a8807-fba7-4164-99b2-3e71a912af66-cilium-ipsec-secrets\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458298 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f24a8807-fba7-4164-99b2-3e71a912af66-hubble-tls\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458320 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-bpf-maps\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458337 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-etc-cni-netd\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458359 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-host-proc-sys-kernel\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458674 kubelet[2595]: I0213 15:34:50.458379 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxqlw\" (UniqueName: \"kubernetes.io/projected/f24a8807-fba7-4164-99b2-3e71a912af66-kube-api-access-cxqlw\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458790 kubelet[2595]: I0213 15:34:50.458422 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f24a8807-fba7-4164-99b2-3e71a912af66-cilium-config-path\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458790 kubelet[2595]: I0213 15:34:50.458444 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-xtables-lock\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.458790 kubelet[2595]: I0213 15:34:50.458467 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f24a8807-fba7-4164-99b2-3e71a912af66-host-proc-sys-net\") pod \"cilium-cz474\" (UID: \"f24a8807-fba7-4164-99b2-3e71a912af66\") " pod="kube-system/cilium-cz474"
Feb 13 15:34:50.459330 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:50.463044 systemd-logind[1427]: New session 25 of user core.
Feb 13 15:34:50.470575 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:34:50.520243 sshd[4410]: Connection closed by 10.0.0.1 port 40210
Feb 13 15:34:50.520759 sshd-session[4408]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:50.538076 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:40210.service: Deactivated successfully.
Feb 13 15:34:50.539768 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:34:50.541196 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:34:50.553815 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:40218.service - OpenSSH per-connection server daemon (10.0.0.1:40218).
Feb 13 15:34:50.554763 systemd-logind[1427]: Removed session 25.
Feb 13 15:34:50.596869 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 40218 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:34:50.598149 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:50.602462 systemd-logind[1427]: New session 26 of user core.
Feb 13 15:34:50.608893 kubelet[2595]: I0213 15:34:50.608864 2595 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="24af0153-5005-4f62-b880-72fc5025b2c2" path="/var/lib/kubelet/pods/24af0153-5005-4f62-b880-72fc5025b2c2/volumes"
Feb 13 15:34:50.609549 kubelet[2595]: I0213 15:34:50.609438 2595 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce7691a5-4d42-47cf-b12b-e4016c5ee3a7" path="/var/lib/kubelet/pods/ce7691a5-4d42-47cf-b12b-e4016c5ee3a7/volumes"
Feb 13 15:34:50.612596 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:34:50.721858 kubelet[2595]: E0213 15:34:50.720166 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:50.725890 containerd[1452]: time="2025-02-13T15:34:50.720717644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cz474,Uid:f24a8807-fba7-4164-99b2-3e71a912af66,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:50.750470 containerd[1452]: time="2025-02-13T15:34:50.750291155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:50.750470 containerd[1452]: time="2025-02-13T15:34:50.750342671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:50.750470 containerd[1452]: time="2025-02-13T15:34:50.750353550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:50.750470 containerd[1452]: time="2025-02-13T15:34:50.750451342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:50.779600 systemd[1]: Started cri-containerd-13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb.scope - libcontainer container 13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb.
Feb 13 15:34:50.803985 containerd[1452]: time="2025-02-13T15:34:50.803931474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cz474,Uid:f24a8807-fba7-4164-99b2-3e71a912af66,Namespace:kube-system,Attempt:0,} returns sandbox id \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\""
Feb 13 15:34:50.804664 kubelet[2595]: E0213 15:34:50.804643 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:50.806748 containerd[1452]: time="2025-02-13T15:34:50.806713662Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:34:50.816741 containerd[1452]: time="2025-02-13T15:34:50.816690623Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a\""
Feb 13 15:34:50.817407 containerd[1452]: time="2025-02-13T15:34:50.817363532Z" level=info msg="StartContainer for \"e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a\""
Feb 13 15:34:50.841557 systemd[1]: Started cri-containerd-e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a.scope - libcontainer container e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a.
Feb 13 15:34:50.865588 containerd[1452]: time="2025-02-13T15:34:50.865110540Z" level=info msg="StartContainer for \"e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a\" returns successfully"
Feb 13 15:34:50.883756 systemd[1]: cri-containerd-e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a.scope: Deactivated successfully.
Feb 13 15:34:50.917367 containerd[1452]: time="2025-02-13T15:34:50.917308649Z" level=info msg="shim disconnected" id=e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a namespace=k8s.io
Feb 13 15:34:50.917940 containerd[1452]: time="2025-02-13T15:34:50.917749376Z" level=warning msg="cleaning up after shim disconnected" id=e69e53768f273ede637718f5d39bea9fb0ab9923953b6b381a2dc30bbac59f5a namespace=k8s.io
Feb 13 15:34:50.917940 containerd[1452]: time="2025-02-13T15:34:50.917766215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:51.840427 kubelet[2595]: E0213 15:34:51.840284 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:51.843566 containerd[1452]: time="2025-02-13T15:34:51.842979673Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:34:51.851118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370114702.mount: Deactivated successfully.
Feb 13 15:34:51.861623 containerd[1452]: time="2025-02-13T15:34:51.861585705Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada\""
Feb 13 15:34:51.862036 containerd[1452]: time="2025-02-13T15:34:51.862006035Z" level=info msg="StartContainer for \"3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada\""
Feb 13 15:34:51.890538 systemd[1]: Started cri-containerd-3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada.scope - libcontainer container 3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada.
Feb 13 15:34:51.908710 containerd[1452]: time="2025-02-13T15:34:51.908671424Z" level=info msg="StartContainer for \"3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada\" returns successfully"
Feb 13 15:34:51.922373 systemd[1]: cri-containerd-3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada.scope: Deactivated successfully.
Feb 13 15:34:51.940640 containerd[1452]: time="2025-02-13T15:34:51.940575627Z" level=info msg="shim disconnected" id=3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada namespace=k8s.io
Feb 13 15:34:51.940640 containerd[1452]: time="2025-02-13T15:34:51.940629823Z" level=warning msg="cleaning up after shim disconnected" id=3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada namespace=k8s.io
Feb 13 15:34:51.940640 containerd[1452]: time="2025-02-13T15:34:51.940637863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:52.565378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c1829921ed0a85af81f9071ab03238ad8f539bb49eeaa9666f41a8cb5884ada-rootfs.mount: Deactivated successfully.
Feb 13 15:34:52.843846 kubelet[2595]: E0213 15:34:52.843582 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:52.845426 containerd[1452]: time="2025-02-13T15:34:52.845350299Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:34:52.859788 containerd[1452]: time="2025-02-13T15:34:52.859747777Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668\""
Feb 13 15:34:52.860362 containerd[1452]: time="2025-02-13T15:34:52.860339058Z" level=info msg="StartContainer for \"185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668\""
Feb 13 15:34:52.890591 systemd[1]: Started cri-containerd-185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668.scope - libcontainer container 185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668.
Feb 13 15:34:52.914769 systemd[1]: cri-containerd-185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668.scope: Deactivated successfully.
Feb 13 15:34:52.916770 containerd[1452]: time="2025-02-13T15:34:52.916639536Z" level=info msg="StartContainer for \"185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668\" returns successfully"
Feb 13 15:34:52.935982 containerd[1452]: time="2025-02-13T15:34:52.935928007Z" level=info msg="shim disconnected" id=185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668 namespace=k8s.io
Feb 13 15:34:52.935982 containerd[1452]: time="2025-02-13T15:34:52.935980164Z" level=warning msg="cleaning up after shim disconnected" id=185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668 namespace=k8s.io
Feb 13 15:34:52.936155 containerd[1452]: time="2025-02-13T15:34:52.935989363Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:53.565473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-185d3f7fb4346d2fa1da65615a38dd3ee696d77f7c2655f17c3b58086ec79668-rootfs.mount: Deactivated successfully.
Feb 13 15:34:53.847917 kubelet[2595]: E0213 15:34:53.847888 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.850628 containerd[1452]: time="2025-02-13T15:34:53.850248582Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:34:53.864830 containerd[1452]: time="2025-02-13T15:34:53.864442736Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6\""
Feb 13 15:34:53.865777 containerd[1452]: time="2025-02-13T15:34:53.865679699Z" level=info msg="StartContainer for \"f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6\""
Feb 13 15:34:53.900598 systemd[1]: Started cri-containerd-f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6.scope - libcontainer container f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6.
Feb 13 15:34:53.920979 systemd[1]: cri-containerd-f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6.scope: Deactivated successfully.
Feb 13 15:34:53.923437 containerd[1452]: time="2025-02-13T15:34:53.923384858Z" level=info msg="StartContainer for \"f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6\" returns successfully"
Feb 13 15:34:53.940882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6-rootfs.mount: Deactivated successfully.
Feb 13 15:34:53.945416 containerd[1452]: time="2025-02-13T15:34:53.945352407Z" level=info msg="shim disconnected" id=f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6 namespace=k8s.io
Feb 13 15:34:53.945416 containerd[1452]: time="2025-02-13T15:34:53.945413563Z" level=warning msg="cleaning up after shim disconnected" id=f9c3d09747a6bba8c33b3d09f6b4fe3951f72ae331d253ed0c36b14c7c47fdc6 namespace=k8s.io
Feb 13 15:34:53.945416 containerd[1452]: time="2025-02-13T15:34:53.945421763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:54.665475 kubelet[2595]: E0213 15:34:54.665444 2595 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:34:54.851409 kubelet[2595]: E0213 15:34:54.851351 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:54.854653 containerd[1452]: time="2025-02-13T15:34:54.854589834Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:34:54.866819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173275917.mount: Deactivated successfully.
Feb 13 15:34:54.869706 containerd[1452]: time="2025-02-13T15:34:54.869655398Z" level=info msg="CreateContainer within sandbox \"13835dda2ce637a4bb10da71196d700b572e14d42ce6c9aadc716413d485cbcb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a298d0dca6f75126ae8c57df71399753a5272f1055ef317fd70110c5476cb094\""
Feb 13 15:34:54.870997 containerd[1452]: time="2025-02-13T15:34:54.870621862Z" level=info msg="StartContainer for \"a298d0dca6f75126ae8c57df71399753a5272f1055ef317fd70110c5476cb094\""
Feb 13 15:34:54.902771 systemd[1]: Started cri-containerd-a298d0dca6f75126ae8c57df71399753a5272f1055ef317fd70110c5476cb094.scope - libcontainer container a298d0dca6f75126ae8c57df71399753a5272f1055ef317fd70110c5476cb094.
Feb 13 15:34:54.932344 containerd[1452]: time="2025-02-13T15:34:54.932240719Z" level=info msg="StartContainer for \"a298d0dca6f75126ae8c57df71399753a5272f1055ef317fd70110c5476cb094\" returns successfully"
Feb 13 15:34:55.233510 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:34:55.855383 kubelet[2595]: E0213 15:34:55.855356 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:55.869756 kubelet[2595]: I0213 15:34:55.869712 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cz474" podStartSLOduration=5.869669016 podStartE2EDuration="5.869669016s" podCreationTimestamp="2025-02-13 15:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:55.869497345 +0000 UTC m=+81.368949359" watchObservedRunningTime="2025-02-13 15:34:55.869669016 +0000 UTC m=+81.369121030"
Feb 13 15:34:55.984323 kubelet[2595]: I0213 15:34:55.984286 2595 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:34:55Z","lastTransitionTime":"2025-02-13T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:34:56.857886 kubelet[2595]: E0213 15:34:56.857852 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:57.859381 kubelet[2595]: E0213 15:34:57.859340 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:58.069211 systemd-networkd[1373]: lxc_health: Link UP
Feb 13 15:34:58.079767 systemd-networkd[1373]: lxc_health: Gained carrier
Feb 13 15:34:58.861889 kubelet[2595]: E0213 15:34:58.861843 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:59.799553 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Feb 13 15:34:59.862111 kubelet[2595]: E0213 15:34:59.862072 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:03.370299 sshd[4422]: Connection closed by 10.0.0.1 port 40218
Feb 13 15:35:03.372328 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:03.375867 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:40218.service: Deactivated successfully.
Feb 13 15:35:03.377460 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:35:03.379916 systemd-logind[1427]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:35:03.380992 systemd-logind[1427]: Removed session 26.