Feb 13 19:01:56.932268 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:01:56.932290 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:01:56.932300 kernel: KASLR enabled
Feb 13 19:01:56.932306 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:01:56.932311 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:01:56.932317 kernel: random: crng init done
Feb 13 19:01:56.932324 kernel: secureboot: Secure boot disabled
Feb 13 19:01:56.932330 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:01:56.932342 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:01:56.932350 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:01:56.932356 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932362 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932368 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932374 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932381 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932389 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932395 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932401 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932407 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:01:56.932414 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:01:56.932420 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:01:56.932427 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:01:56.932433 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:01:56.932439 kernel: Zone ranges:
Feb 13 19:01:56.932445 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:01:56.932453 kernel: DMA32 empty
Feb 13 19:01:56.932459 kernel: Normal empty
Feb 13 19:01:56.932477 kernel: Movable zone start for each node
Feb 13 19:01:56.932483 kernel: Early memory node ranges
Feb 13 19:01:56.932490 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:01:56.932496 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:01:56.932503 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:01:56.932510 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:01:56.932516 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:01:56.932522 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:01:56.932528 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:01:56.932539 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:01:56.932548 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:01:56.932554 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:01:56.932562 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:01:56.932571 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:01:56.932578 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:01:56.932585 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:01:56.932592 kernel: psci: Trusted OS migration not required
Feb 13 19:01:56.932599 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:01:56.932605 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:01:56.932612 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:01:56.932619 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:01:56.932626 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:01:56.932632 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:01:56.932639 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:01:56.932645 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:01:56.932652 kernel: CPU features: detected: Spectre-v4
Feb 13 19:01:56.932659 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:01:56.932666 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:01:56.932673 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:01:56.932679 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:01:56.932686 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:01:56.932692 kernel: alternatives: applying boot alternatives
Feb 13 19:01:56.932700 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:01:56.932707 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:01:56.932713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:01:56.932720 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:01:56.932726 kernel: Fallback order for Node 0: 0
Feb 13 19:01:56.932735 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:01:56.932741 kernel: Policy zone: DMA
Feb 13 19:01:56.932748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:01:56.932754 kernel: software IO TLB: area num 4.
Feb 13 19:01:56.932761 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:01:56.932767 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Feb 13 19:01:56.932774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:01:56.932781 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:01:56.932788 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:01:56.932794 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:01:56.932801 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:01:56.932808 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:01:56.932816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
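The kernel command line logged above selects the Flatcar A/B boot slot (vmlinuz-a, USR-A) and carries the dm-verity root hash for the /usr partition. A minimal sketch of pulling such parameters apart from /proc/cmdline at runtime, for instance to read verity.usrhash; only the file path and parameter names come from the log, the script itself is illustrative:

    #!/usr/bin/env python3
    """Parse a kernel command line like the one logged above into key/value pairs."""

    def parse_cmdline(text: str) -> dict:
        # Space-separated tokens; a token without '=' (a bare flag) maps to None.
        # Repeated keys keep the last value, matching how most kernel
        # parameters are consumed.
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            params = parse_cmdline(f.read())
        print(params.get("root"))            # LABEL=ROOT in the boot above
        print(params.get("verity.usrhash"))  # dm-verity root hash for /usr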
Feb 13 19:01:56.932822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:01:56.932829 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:01:56.932835 kernel: GICv3: 256 SPIs implemented
Feb 13 19:01:56.932842 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:01:56.932848 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:01:56.932855 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:01:56.932868 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:01:56.932875 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:01:56.932882 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:01:56.932888 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:01:56.932897 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:01:56.932903 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:01:56.932910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:01:56.932917 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:01:56.932924 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:01:56.932931 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:01:56.932937 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:01:56.932944 kernel: arm-pv: using stolen time PV
Feb 13 19:01:56.932951 kernel: Console: colour dummy device 80x25
Feb 13 19:01:56.932970 kernel: ACPI: Core revision 20230628
Feb 13 19:01:56.932977 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:01:56.932985 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:01:56.932993 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:01:56.933000 kernel: landlock: Up and running.
Feb 13 19:01:56.933007 kernel: SELinux: Initializing.
Feb 13 19:01:56.933013 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:56.933020 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:56.933027 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:01:56.933034 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:01:56.933041 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:01:56.933050 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:01:56.933057 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:01:56.933063 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:01:56.933070 kernel: Remapping and enabling EFI services.
Feb 13 19:01:56.933077 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:01:56.933084 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:01:56.933091 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:01:56.933098 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:01:56.933104 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:01:56.933112 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:01:56.933119 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:01:56.933131 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:01:56.933140 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:01:56.933147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:01:56.933154 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:01:56.933161 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:01:56.933168 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:01:56.933175 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:01:56.933184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:01:56.933191 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:01:56.933198 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:01:56.933205 kernel: SMP: Total of 4 processors activated.
Feb 13 19:01:56.933212 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:01:56.933219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:01:56.933226 kernel: CPU features: detected: Common not Private translations
Feb 13 19:01:56.933233 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:01:56.933242 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:01:56.933249 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:01:56.933256 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:01:56.933263 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:01:56.933270 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:01:56.933277 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:01:56.933284 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:01:56.933297 kernel: alternatives: applying system-wide alternatives
Feb 13 19:01:56.933304 kernel: devtmpfs: initialized
Feb 13 19:01:56.933311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:01:56.933320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:01:56.933327 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:01:56.933337 kernel: SMBIOS 3.0.0 present.
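Feature lines such as "CRC32 instructions" and "LSE atomic instructions" surface in user space as arm64 hwcaps on the Features line of /proc/cpuinfo. A small sketch that checks for a few of them; the hwcap token names used here (crc32, atomics, lrcpc, ssbs) are my mapping of the logged features and worth verifying against the running kernel:

    #!/usr/bin/env python3
    """Check a few arm64 hwcaps against the 'Features' line in /proc/cpuinfo."""

    WANTED = {"crc32", "atomics", "lrcpc", "ssbs"}  # CRC32, LSE, LDAPR, SSBS

    def cpu_features(path: str = "/proc/cpuinfo") -> set:
        with open(path) as f:
            for line in f:
                if line.startswith("Features"):
                    # e.g. "Features : fp asimd evtstrm aes crc32 atomics ..."
                    return set(line.split(":", 1)[1].split())
        return set()

    if __name__ == "__main__":
        have = cpu_features()
        for feat in sorted(WANTED):
            print(f"{feat}: {'present' if feat in have else 'missing'}")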
Feb 13 19:01:56.933346 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:01:56.933353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:01:56.933360 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:01:56.933367 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:01:56.933374 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:01:56.933383 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:01:56.933390 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Feb 13 19:01:56.933398 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:01:56.933405 kernel: cpuidle: using governor menu
Feb 13 19:01:56.933412 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:01:56.933419 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:01:56.933426 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:01:56.933433 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:01:56.933440 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:01:56.933449 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:01:56.933456 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:01:56.933463 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:01:56.933470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:01:56.933477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:01:56.933484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:01:56.933491 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:01:56.933498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:01:56.933505 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:01:56.933513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:01:56.933521 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:01:56.933528 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:01:56.933535 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:01:56.933543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:01:56.933550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:01:56.933557 kernel: ACPI: Interpreter enabled
Feb 13 19:01:56.933564 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:01:56.933571 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:01:56.933578 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:01:56.933586 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:01:56.933594 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:01:56.933743 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:01:56.933828 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:01:56.933949 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:01:56.934018 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:01:56.934079 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:01:56.934092 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:01:56.934104 kernel: PCI host bridge to bus 0000:00
Feb 13 19:01:56.934177 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:01:56.934238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:01:56.934296 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:01:56.934362 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:01:56.934441 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:01:56.934518 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:01:56.934586 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:01:56.934651 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:01:56.934716 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:01:56.934780 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:01:56.934865 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:01:56.934937 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:01:56.935000 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:01:56.935057 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:01:56.935114 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:01:56.935123 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:01:56.935131 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:01:56.935138 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:01:56.935145 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:01:56.935154 kernel: iommu: Default domain type: Translated
Feb 13 19:01:56.935161 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:01:56.935168 kernel: efivars: Registered efivars operations
Feb 13 19:01:56.935175 kernel: vgaarb: loaded
Feb 13 19:01:56.935182 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:01:56.935189 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:01:56.935196 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:01:56.935203 kernel: pnp: PnP ACPI init
Feb 13 19:01:56.935276 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:01:56.935288 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:01:56.935295 kernel: NET: Registered PF_INET protocol family
Feb 13 19:01:56.935302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:01:56.935309 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:01:56.935317 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:01:56.935324 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:01:56.935331 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:01:56.935345 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:01:56.935355 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:56.935363 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:56.935370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:01:56.935377 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:01:56.935384 kernel: kvm [1]: HYP mode not available
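The bus walk above found the QEMU PCIe host bridge ([1b36:0008]) and one virtio device ([1af4:1005]) whose BARs were then assigned. The same enumeration is visible from user space through sysfs; a minimal sketch, assuming the standard /sys/bus/pci/devices layout:

    #!/usr/bin/env python3
    """List PCI devices as enumerated above, using the sysfs PCI tree."""
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    def read_attr(dev: str, attr: str) -> str:
        with open(os.path.join(PCI_ROOT, dev, attr)) as f:
            return f.read().strip()

    if __name__ == "__main__":
        for dev in sorted(os.listdir(PCI_ROOT)):
            vendor = read_attr(dev, "vendor")  # e.g. 0x1af4
            device = read_attr(dev, "device")  # e.g. 0x1005
            klass = read_attr(dev, "class")    # e.g. 0x00ff00
            print(f"{dev} [{vendor[2:]}:{device[2:]}] class {klass}")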
Feb 13 19:01:56.935391 kernel: Initialise system trusted keyrings
Feb 13 19:01:56.935399 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:01:56.935406 kernel: Key type asymmetric registered
Feb 13 19:01:56.935413 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:01:56.935420 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:01:56.935428 kernel: io scheduler mq-deadline registered
Feb 13 19:01:56.935436 kernel: io scheduler kyber registered
Feb 13 19:01:56.935443 kernel: io scheduler bfq registered
Feb 13 19:01:56.935450 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:01:56.935457 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:01:56.935465 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:01:56.935544 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:01:56.935555 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:01:56.935562 kernel: thunder_xcv, ver 1.0
Feb 13 19:01:56.935571 kernel: thunder_bgx, ver 1.0
Feb 13 19:01:56.935578 kernel: nicpf, ver 1.0
Feb 13 19:01:56.935585 kernel: nicvf, ver 1.0
Feb 13 19:01:56.935658 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:01:56.935734 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:01:56 UTC (1739473316)
Feb 13 19:01:56.935744 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:01:56.935751 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:01:56.935758 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:01:56.935767 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:01:56.935774 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:01:56.935782 kernel: Segment Routing with IPv6
Feb 13 19:01:56.935789 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:01:56.935796 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:01:56.935803 kernel: Key type dns_resolver registered
Feb 13 19:01:56.935810 kernel: registered taskstats version 1
Feb 13 19:01:56.935818 kernel: Loading compiled-in X.509 certificates
Feb 13 19:01:56.935825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:01:56.935834 kernel: Key type .fscrypt registered
Feb 13 19:01:56.935841 kernel: Key type fscrypt-provisioning registered
Feb 13 19:01:56.935848 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:01:56.935855 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:01:56.935873 kernel: ima: No architecture policies found
Feb 13 19:01:56.935881 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:01:56.935888 kernel: clk: Disabling unused clocks
Feb 13 19:01:56.935895 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:01:56.935903 kernel: Run /init as init process
Feb 13 19:01:56.935912 kernel: with arguments:
Feb 13 19:01:56.935919 kernel: /init
Feb 13 19:01:56.935926 kernel: with environment:
Feb 13 19:01:56.935933 kernel: HOME=/
Feb 13 19:01:56.935940 kernel: TERM=linux
Feb 13 19:01:56.935947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:01:56.935955 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:01:56.935964 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:01:56.935974 systemd[1]: Detected virtualization kvm.
Feb 13 19:01:56.935981 systemd[1]: Detected architecture arm64.
Feb 13 19:01:56.935989 systemd[1]: Running in initrd.
Feb 13 19:01:56.935996 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:01:56.936004 systemd[1]: Hostname set to <localhost>.
Feb 13 19:01:56.936011 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:01:56.936019 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:01:56.936026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:01:56.936036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:01:56.936044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:01:56.936052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:01:56.936059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:01:56.936068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:01:56.936076 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:01:56.936085 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:01:56.936093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:01:56.936101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:01:56.936108 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:01:56.936116 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:01:56.936124 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:01:56.936131 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:01:56.936139 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:01:56.936146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:01:56.936156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:01:56.936163 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:01:56.936171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:01:56.936179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:01:56.936186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:01:56.936194 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:01:56.936201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:01:56.936209 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:01:56.936218 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:01:56.936225 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:01:56.936233 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:01:56.936240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:01:56.936248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:56.936256 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:01:56.936264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:01:56.936273 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:01:56.936302 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:01:56.936323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:01:56.936331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:01:56.936345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:56.936353 systemd-journald[239]: Journal started
Feb 13 19:01:56.936372 systemd-journald[239]: Runtime Journal (/run/log/journal/6519e5379a544962924133eca3d204ba) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:01:56.917325 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:01:56.939880 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:01:56.939911 kernel: Bridge firewalling registered
Feb 13 19:01:56.940382 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:01:56.941665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:01:56.943464 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:01:56.961047 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:56.962807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:01:56.964851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:01:56.967772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:01:56.974540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:01:56.976111 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:01:56.982210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:01:56.993021 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:01:56.995258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:56.997425 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:01:57.012080 dracut-cmdline[282]: dracut-dracut-053
Feb 13 19:01:57.014707 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:01:57.029134 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 19:01:57.029151 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:01:57.029182 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:01:57.034669 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 19:01:57.035994 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:01:57.039927 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:01:57.092892 kernel: SCSI subsystem initialized
Feb 13 19:01:57.097875 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:01:57.105896 kernel: iscsi: registered transport (tcp)
Feb 13 19:01:57.118917 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:01:57.118971 kernel: QLogic iSCSI HBA Driver
Feb 13 19:01:57.163609 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:01:57.171023 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:01:57.186155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:01:57.186210 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:01:57.187237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:01:57.280887 kernel: raid6: neonx8 gen() 15766 MB/s
Feb 13 19:01:57.297890 kernel: raid6: neonx4 gen() 15700 MB/s
Feb 13 19:01:57.314882 kernel: raid6: neonx2 gen() 13136 MB/s
Feb 13 19:01:57.331874 kernel: raid6: neonx1 gen() 10125 MB/s
Feb 13 19:01:57.348880 kernel: raid6: int64x8 gen() 6678 MB/s
Feb 13 19:01:57.365879 kernel: raid6: int64x4 gen() 7125 MB/s
Feb 13 19:01:57.382877 kernel: raid6: int64x2 gen() 6105 MB/s
Feb 13 19:01:57.400041 kernel: raid6: int64x1 gen() 5028 MB/s
Feb 13 19:01:57.400058 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Feb 13 19:01:57.418022 kernel: raid6: .... xor() 11814 MB/s, rmw enabled
Feb 13 19:01:57.418036 kernel: raid6: using neon recovery algorithm
Feb 13 19:01:57.423292 kernel: xor: measuring software checksum speed
Feb 13 19:01:57.423314 kernel: 8regs : 21641 MB/sec
Feb 13 19:01:57.424002 kernel: 32regs : 20881 MB/sec
Feb 13 19:01:57.425298 kernel: arm64_neon : 27870 MB/sec
Feb 13 19:01:57.425308 kernel: xor: using function: arm64_neon (27870 MB/sec)
Feb 13 19:01:57.475921 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:01:57.486591 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:01:57.503037 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:01:57.516724 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 19:01:57.522149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:01:57.525530 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:01:57.539934 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Feb 13 19:01:57.564928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:01:57.575012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:01:57.616030 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:01:57.621016 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:01:57.634556 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:01:57.636190 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:01:57.638981 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:01:57.641514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:01:57.652060 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:01:57.661908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:01:57.680414 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:01:57.684983 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:01:57.685090 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:01:57.685100 kernel: GPT:9289727 != 19775487
Feb 13 19:01:57.685109 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:01:57.685118 kernel: GPT:9289727 != 19775487
Feb 13 19:01:57.685128 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:01:57.685138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:01:57.681026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:01:57.681147 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:57.687283 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:57.688401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:01:57.688535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:57.692872 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:57.702878 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (514)
Feb 13 19:01:57.702910 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Feb 13 19:01:57.708117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:57.718961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:57.726762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:01:57.734452 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:01:57.749648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:01:57.750962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:01:57.760397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:01:57.774009 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
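The repeated "GPT:9289727 != 19775487" warnings mean the backup GPT header still sits where the original, smaller disk image ended rather than at the last LBA of the grown 19775488-block virtual disk; disk-uuid.service rewrites the headers moments later. A minimal sketch of the check the kernel performs, run against a raw image file (512-byte sectors and the standard layout with the primary header at LBA 1 are assumptions; a block device would need an ioctl for its size instead of fstat):

    #!/usr/bin/env python3
    """Compare a GPT primary header's backup-LBA field with the actual disk end."""
    import os
    import struct
    import sys

    SECTOR = 512  # logical block size, matching the virtio-blk line above

    def check(path: str) -> None:
        with open(path, "rb") as f:
            f.seek(SECTOR)  # primary GPT header lives at LBA 1
            hdr = f.read(92)
            if hdr[0:8] != b"EFI PART":
                raise SystemExit("no GPT signature found")
            # Offset 32 holds the 64-bit LBA where the backup header should be.
            backup_lba = struct.unpack_from("<Q", hdr, 32)[0]
            last_lba = os.fstat(f.fileno()).st_size // SECTOR - 1
        if backup_lba != last_lba:
            print(f"GPT:{backup_lba} != {last_lba} (backup header not at disk end)")
        else:
            print("backup header is at the disk end")

    if __name__ == "__main__":
        check(sys.argv[1])  # e.g. ./check_gpt.py flatcar.img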
Feb 13 19:01:57.779018 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:57.781176 disk-uuid[554]: Primary Header is updated.
Feb 13 19:01:57.781176 disk-uuid[554]: Secondary Entries is updated.
Feb 13 19:01:57.781176 disk-uuid[554]: Secondary Header is updated.
Feb 13 19:01:57.785236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:01:57.805174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:58.794952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:01:58.796100 disk-uuid[555]: The operation has completed successfully.
Feb 13 19:01:58.819764 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:01:58.819875 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:01:58.854059 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:01:58.857912 sh[574]: Success
Feb 13 19:01:58.886977 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:01:58.948974 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:01:58.950303 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:01:58.953177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:01:58.965269 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:01:58.965305 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:58.965316 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:01:58.967200 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:01:58.967225 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:01:58.971032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:01:58.972597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:01:58.982016 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:01:58.983829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:01:58.994275 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:58.994321 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:58.994348 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:01:58.996893 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:01:59.004797 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:01:59.006799 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:59.013108 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:01:59.019067 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:01:59.090229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:01:59.105059 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:01:59.135889 systemd-networkd[760]: lo: Link UP
Feb 13 19:01:59.135900 systemd-networkd[760]: lo: Gained carrier
Feb 13 19:01:59.136878 systemd-networkd[760]: Enumeration completed
Feb 13 19:01:59.136979 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:01:59.137265 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:59.137269 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:01:59.138226 systemd-networkd[760]: eth0: Link UP
Feb 13 19:01:59.138229 systemd-networkd[760]: eth0: Gained carrier
Feb 13 19:01:59.138236 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:59.138703 systemd[1]: Reached target network.target - Network.
Feb 13 19:01:59.164295 ignition[674]: Ignition 2.20.0
Feb 13 19:01:59.164310 ignition[674]: Stage: fetch-offline
Feb 13 19:01:59.164364 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:59.164374 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:01:59.164869 ignition[674]: parsed url from cmdline: ""
Feb 13 19:01:59.164873 ignition[674]: no config URL provided
Feb 13 19:01:59.164878 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:01:59.164885 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:01:59.164910 ignition[674]: op(1): [started] loading QEMU firmware config module
Feb 13 19:01:59.164914 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:01:59.172912 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:01:59.184417 ignition[674]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:01:59.223477 ignition[674]: parsing config with SHA512: 9f79ed104416bb15fbed45171a075f8cf698612c91a975fa543ee57e6724e7d01abd119e810adf151e44551683bf171b9f33e65aa7eed8cae6a4887ccc1a5b76
Feb 13 19:01:59.229968 unknown[674]: fetched base config from "system"
Feb 13 19:01:59.229981 unknown[674]: fetched user config from "qemu"
Feb 13 19:01:59.232459 ignition[674]: fetch-offline: fetch-offline passed
Feb 13 19:01:59.232612 ignition[674]: Ignition finished successfully
Feb 13 19:01:59.233736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:01:59.235993 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:01:59.247051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:01:59.260367 ignition[774]: Ignition 2.20.0
Feb 13 19:01:59.260378 ignition[774]: Stage: kargs
Feb 13 19:01:59.260601 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:59.260611 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:01:59.261509 ignition[774]: kargs: kargs passed
Feb 13 19:01:59.264693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:01:59.261557 ignition[774]: Ignition finished successfully
Feb 13 19:01:59.272026 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
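Ignition found no config URL on the kernel command line above, so it loaded qemu_fw_cfg and fetched the user config from QEMU's firmware config device; that payload is what gets hashed with SHA512 and parsed. The config itself is not reproduced in the log, but a minimal spec-3.x config of the kind typically injected at VM launch with -fw_cfg name=opt/com.coreos/config,file=config.ign might look like this (the SSH key is a placeholder, and this is not the config the log actually parsed):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          {
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder..."]
          }
        ]
      }
    }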
Feb 13 19:01:59.282240 ignition[782]: Ignition 2.20.0
Feb 13 19:01:59.282251 ignition[782]: Stage: disks
Feb 13 19:01:59.282413 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:59.285096 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:01:59.282422 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:01:59.286379 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:01:59.283344 ignition[782]: disks: disks passed
Feb 13 19:01:59.288116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:01:59.283389 ignition[782]: Ignition finished successfully
Feb 13 19:01:59.290058 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:01:59.291869 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:01:59.293379 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:01:59.309044 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:01:59.318630 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:01:59.322915 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:01:59.971972 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:02:00.013883 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:02:00.014542 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:02:00.015892 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:00.028013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:00.030046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:02:00.033463 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:02:00.033516 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:02:00.048038 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Feb 13 19:02:00.048063 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:00.048073 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:00.048083 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:00.048092 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:00.033545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:00.037298 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:02:00.040375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:02:00.050061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:00.100459 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:02:00.103721 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:02:00.107224 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:02:00.110212 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:02:00.193492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:00.209040 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:02:00.212076 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:00.215889 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:00.231915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:00.232929 ignition[913]: INFO : Ignition 2.20.0
Feb 13 19:02:00.232929 ignition[913]: INFO : Stage: mount
Feb 13 19:02:00.232929 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:00.232929 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:00.239902 ignition[913]: INFO : mount: mount passed
Feb 13 19:02:00.239902 ignition[913]: INFO : Ignition finished successfully
Feb 13 19:02:00.236412 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:02:00.242996 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:02:00.964039 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:02:00.973042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:00.978876 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
Feb 13 19:02:00.978923 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:00.981479 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:00.981496 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:00.983872 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:00.984782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:00.999921 ignition[945]: INFO : Ignition 2.20.0
Feb 13 19:02:00.999921 ignition[945]: INFO : Stage: files
Feb 13 19:02:01.001632 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:01.001632 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:01.001632 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:02:01.005222 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:02:01.005222 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:02:01.008119 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:02:01.008119 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:02:01.008119 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:02:01.007734 unknown[945]: wrote ssh authorized keys file for user: core
Feb 13 19:02:01.014939 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:02:01.014939 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:02:01.069006 systemd-networkd[760]: eth0: Gained IPv6LL
Feb 13 19:02:01.143917 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:02:01.592581 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:02:01.594749 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:02:01.594749 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:02:01.917714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:02:01.979232 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:01.981264 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:02:02.212164 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:02:02.448837 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:02.448837 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:02:02.453934 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:02:02.455957 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:02.474656 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:02.478399 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:02.480020 ignition[945]: INFO : files: files passed
Feb 13 19:02:02.480020 ignition[945]: INFO : Ignition finished successfully
Feb 13 19:02:02.482564 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:02:02.498228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:02:02.502061 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:02:02.505074 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:02:02.506273 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:02:02.510527 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:02:02.512945 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:02.512945 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:02.518027 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:02.516826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:02.519710 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:02:02.533069 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
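The files stage above wrote a prepare-helm.service unit and enabled it via preset, but the log does not show the unit body. A purely hypothetical sketch of what such a unit could contain, unpacking the Helm tarball fetched earlier into /opt/bin (every path and option below is an assumption, not the logged unit):

    # /etc/systemd/system/prepare-helm.service (hypothetical reconstruction)
    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.13.2-linux-arm64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -xf /opt/helm-v3.13.2-linux-arm64.tar.gz -C /opt/bin --strip-components=1 linux-arm64/helm

    [Install]
    WantedBy=multi-user.target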
Feb 13 19:02:02.553123 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:02:02.555455 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:02:02.557348 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:02:02.559438 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:02:02.560526 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:02:02.576349 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:02:02.587061 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:02:02.596490 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:02:02.597826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:02:02.600042 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:02:02.601928 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:02:02.602068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:02:02.604767 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:02:02.607035 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:02:02.608844 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:02:02.610684 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:02:02.612643 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:02:02.614761 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:02:02.616788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:02:02.618959 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:02:02.621164 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:02:02.623104 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:02:02.624870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:02:02.625022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:02:02.627612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:02:02.629789 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:02:02.632025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:02:02.632961 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:02:02.634424 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:02:02.634600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:02:02.637604 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:02:02.637780 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:02:02.639928 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:02:02.641694 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:02:02.644920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:02:02.647068 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 19:02:02.649370 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:02:02.650971 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:02:02.651061 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:02:02.652785 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:02:02.652916 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:02:02.654814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:02:02.654957 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:02:02.656888 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:02:02.656996 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:02:02.675113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:02:02.677593 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:02:02.678552 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:02:02.678696 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:02:02.680683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:02:02.680802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:02:02.687813 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:02:02.687937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:02:02.693189 ignition[1000]: INFO : Ignition 2.20.0 Feb 13 19:02:02.693189 ignition[1000]: INFO : Stage: umount Feb 13 19:02:02.693189 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:02:02.693189 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:02:02.693189 ignition[1000]: INFO : umount: umount passed Feb 13 19:02:02.693189 ignition[1000]: INFO : Ignition finished successfully Feb 13 19:02:02.692945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:02:02.693458 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:02:02.693562 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:02:02.695978 systemd[1]: Stopped target network.target - Network. Feb 13 19:02:02.697926 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:02:02.698008 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:02:02.700109 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:02:02.700169 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:02:02.701926 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:02:02.701984 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:02:02.703887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:02:02.703936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:02:02.706118 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:02:02.708050 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:02:02.714504 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:02:02.714621 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:02:02.718435 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Feb 13 19:02:02.718676 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:02:02.718783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:02:02.721760 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:02:02.722831 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:02:02.722945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:02:02.734974 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:02:02.735926 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:02:02.736003 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:02:02.738229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:02:02.738279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:02.742542 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:02:02.742594 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:02:02.744694 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:02:02.744745 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:02:02.748078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:02:02.752141 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:02:02.752228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:02:02.758702 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:02:02.758881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:02:02.761430 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:02:02.761527 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:02:02.763291 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:02:02.763384 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:02:02.766208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:02:02.766303 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:02:02.767982 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:02:02.768043 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:02:02.769935 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:02:02.770002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:02:02.772875 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:02:02.772929 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:02:02.775784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:02:02.775833 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:02:02.777974 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:02:02.778024 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:02:02.790040 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:02:02.791202 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 13 19:02:02.791275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:02:02.794542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:02:02.794591 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:02:02.797107 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:02:02.797152 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:02:02.799352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:02:02.799400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:02:02.803546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:02:02.803607 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:02:02.803906 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:02:02.804018 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:02:02.806625 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:02:02.808825 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:02:02.818713 systemd[1]: Switching root. Feb 13 19:02:02.848117 systemd-journald[239]: Journal stopped Feb 13 19:02:03.625966 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 19:02:03.626028 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:02:03.626040 kernel: SELinux: policy capability open_perms=1 Feb 13 19:02:03.626050 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:02:03.626059 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:02:03.626072 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:02:03.626084 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:02:03.626097 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:02:03.626106 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:02:03.626115 kernel: audit: type=1403 audit(1739473323.012:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:02:03.626126 systemd[1]: Successfully loaded SELinux policy in 32.353ms. Feb 13 19:02:03.626142 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.743ms. Feb 13 19:02:03.626154 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:02:03.626165 systemd[1]: Detected virtualization kvm. Feb 13 19:02:03.626175 systemd[1]: Detected architecture arm64. Feb 13 19:02:03.626186 systemd[1]: Detected first boot. Feb 13 19:02:03.626200 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:02:03.626210 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:02:03.626220 zram_generator::config[1051]: No configuration found. Feb 13 19:02:03.626231 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:02:03.626242 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:02:03.626253 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Feb 13 19:02:03.626263 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:02:03.626273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:03.626285 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:02:03.626295 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:02:03.626305 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:02:03.626316 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:02:03.626336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:02:03.626348 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:02:03.626358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:02:03.626367 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:02:03.626380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:02:03.626390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:02:03.626400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:02:03.626414 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:02:03.626425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:02:03.626436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:02:03.626446 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:02:03.626456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:02:03.626467 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:02:03.626478 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:02:03.626488 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:02:03.626498 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:02:03.626509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:02:03.626520 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:02:03.626530 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:02:03.626539 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:02:03.626552 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:02:03.626564 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:02:03.626574 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:02:03.626584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:02:03.626594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:02:03.626604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:02:03.626614 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:02:03.626624 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 19:02:03.626634 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:02:03.626645 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:02:03.626657 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:02:03.626667 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:02:03.626677 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:02:03.626687 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:02:03.626698 systemd[1]: Reached target machines.target - Containers. Feb 13 19:02:03.626708 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:02:03.626719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:03.626729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:02:03.626741 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:02:03.626752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:03.626762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:03.626773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:03.626783 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:02:03.626793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:03.626803 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:02:03.626814 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:02:03.626824 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:02:03.626880 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:02:03.626893 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:02:03.626903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:03.626913 kernel: fuse: init (API version 7.39) Feb 13 19:02:03.626923 kernel: loop: module loaded Feb 13 19:02:03.626932 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:02:03.626942 kernel: ACPI: bus type drm_connector registered Feb 13 19:02:03.626952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:02:03.626962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:02:03.626975 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:02:03.626986 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:02:03.627022 systemd-journald[1122]: Collecting audit messages is disabled. Feb 13 19:02:03.627044 systemd-journald[1122]: Journal started Feb 13 19:02:03.627065 systemd-journald[1122]: Runtime Journal (/run/log/journal/6519e5379a544962924133eca3d204ba) is 5.9M, max 47.3M, 41.4M free. 
Feb 13 19:02:03.410559 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:02:03.420960 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:02:03.421353 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:02:03.631336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:02:03.633281 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:02:03.633345 systemd[1]: Stopped verity-setup.service. Feb 13 19:02:03.638746 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:02:03.639566 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:02:03.640778 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:02:03.642147 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:02:03.643344 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:02:03.644586 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:02:03.645943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:02:03.647252 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:02:03.648737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:02:03.650310 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:02:03.650498 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:02:03.652092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:03.652250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:03.653850 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:02:03.654045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:03.655525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:03.655709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:03.657225 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:02:03.657429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:02:03.658960 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:03.659124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:03.660539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:02:03.662071 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:02:03.663772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:02:03.665444 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:02:03.681620 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:02:03.696978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:02:03.699310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:02:03.700452 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:02:03.700509 systemd[1]: Reached target local-fs.target - Local File Systems. 
Feb 13 19:02:03.702530 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:02:03.705674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:02:03.707992 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:02:03.709139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:03.710402 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:02:03.712504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:02:03.713754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:03.714771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:02:03.716073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:03.720051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:03.722310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:02:03.725039 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:02:03.727692 systemd-journald[1122]: Time spent on flushing to /var/log/journal/6519e5379a544962924133eca3d204ba is 15.688ms for 875 entries. Feb 13 19:02:03.727692 systemd-journald[1122]: System Journal (/var/log/journal/6519e5379a544962924133eca3d204ba) is 8M, max 195.6M, 187.6M free. Feb 13 19:02:03.763258 systemd-journald[1122]: Received client request to flush runtime journal. Feb 13 19:02:03.763335 kernel: loop0: detected capacity change from 0 to 113512 Feb 13 19:02:03.763357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:02:03.730965 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:02:03.732605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:02:03.734151 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:02:03.737715 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:02:03.739922 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:02:03.744517 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:02:03.755150 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:02:03.758270 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:02:03.763775 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 19:02:03.763784 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 19:02:03.767957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:02:03.770213 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:02:03.775025 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:03.785211 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Feb 13 19:02:03.789637 kernel: loop1: detected capacity change from 0 to 123192 Feb 13 19:02:03.786908 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:02:03.789978 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:02:03.808836 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:02:03.815701 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:02:03.821090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:02:03.835533 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Feb 13 19:02:03.835550 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Feb 13 19:02:03.839804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:02:03.860973 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 19:02:03.866894 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 19:02:03.873879 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:02:03.880142 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:02:03.880581 (sd-merge)[1195]: Merged extensions into '/usr'. Feb 13 19:02:03.884414 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:02:03.884437 systemd[1]: Reloading... Feb 13 19:02:03.936954 zram_generator::config[1221]: No configuration found. Feb 13 19:02:04.030793 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:02:04.040297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:04.090304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:02:04.090707 systemd[1]: Reloading finished in 205 ms. Feb 13 19:02:04.113547 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:02:04.116228 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:02:04.132191 systemd[1]: Starting ensure-sysext.service... Feb 13 19:02:04.134070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:02:04.142074 systemd[1]: Reload requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:02:04.142093 systemd[1]: Reloading... Feb 13 19:02:04.149938 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:02:04.150144 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:02:04.150767 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:02:04.150991 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 19:02:04.151045 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 19:02:04.153428 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 19:02:04.153440 systemd-tmpfiles[1260]: Skipping /boot Feb 13 19:02:04.162050 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:02:04.162068 systemd-tmpfiles[1260]: Skipping /boot Feb 13 19:02:04.197888 zram_generator::config[1289]: No configuration found. Feb 13 19:02:04.280471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:04.330423 systemd[1]: Reloading finished in 188 ms. Feb 13 19:02:04.341641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:02:04.358937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:02:04.367745 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:04.370562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:02:04.373126 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:02:04.378234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:02:04.384391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:02:04.388608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:02:04.393789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:04.397040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:04.399940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:04.404140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:04.405808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:04.406085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:04.422327 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:02:04.426047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:04.426243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:04.428429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:02:04.432200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:04.432569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:04.434294 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:04.434471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:04.445435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:04.455273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:04.460971 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Feb 13 19:02:04.462066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 19:02:04.465177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:04.467004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:04.467124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:04.472109 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:02:04.474454 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:02:04.476488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:04.476649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:04.479758 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:02:04.481081 augenrules[1366]: No rules Feb 13 19:02:04.481726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:04.482049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:04.483943 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:04.484119 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:04.487297 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:04.487468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:04.489742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:02:04.493955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:02:04.502261 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:02:04.513656 systemd[1]: Finished ensure-sysext.service. Feb 13 19:02:04.526077 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:04.528208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:04.529628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:04.533188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:04.537062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:04.539417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:04.541159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:04.541205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:04.544073 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:02:04.548085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:02:04.550127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:02:04.550682 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:02:04.550846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:04.554345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:04.554772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:04.556463 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:04.556614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:04.559877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:04.560060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:04.568485 systemd-resolved[1329]: Positive Trust Anchors: Feb 13 19:02:04.568504 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:02:04.575761 augenrules[1399]: /sbin/augenrules: No change Feb 13 19:02:04.568534 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:02:04.569011 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:02:04.569331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:04.569391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:04.579390 augenrules[1426]: No rules Feb 13 19:02:04.580853 systemd-resolved[1329]: Defaulting to hostname 'linux'. Feb 13 19:02:04.580934 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:04.581144 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:04.604358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:02:04.609625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:02:04.615922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1390) Feb 13 19:02:04.642407 systemd-networkd[1410]: lo: Link UP Feb 13 19:02:04.642417 systemd-networkd[1410]: lo: Gained carrier Feb 13 19:02:04.643366 systemd-networkd[1410]: Enumeration completed Feb 13 19:02:04.644456 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:02:04.646057 systemd[1]: Reached target network.target - Network. Feb 13 19:02:04.646764 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:02:04.646773 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:02:04.652696 systemd-networkd[1410]: eth0: Link UP Feb 13 19:02:04.652705 systemd-networkd[1410]: eth0: Gained carrier Feb 13 19:02:04.652720 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:02:04.659089 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:02:04.663733 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:02:04.665034 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:02:04.673969 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:02:04.675149 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Feb 13 19:02:04.675610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:02:04.676025 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:02:04.676075 systemd-timesyncd[1411]: Initial clock synchronization to Thu 2025-02-13 19:02:04.725982 UTC. Feb 13 19:02:04.677456 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:02:04.687475 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:02:04.694661 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:02:04.701358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:02:04.704894 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:02:04.716050 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:02:04.726048 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:02:04.744375 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:04.746194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:02:04.771547 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:02:04.773105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:02:04.775026 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:02:04.776150 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:02:04.777430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:02:04.778823 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:02:04.779970 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:02:04.781203 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:02:04.782445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:02:04.782481 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:02:04.783379 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:02:04.784839 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:02:04.787217 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:02:04.790275 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Feb 13 19:02:04.791732 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:02:04.793010 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:02:04.796607 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:02:04.798366 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:02:04.800698 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:02:04.802385 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:02:04.803614 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:02:04.804626 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:02:04.805656 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:04.805690 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:04.806595 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:02:04.808256 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:04.810664 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:02:04.812828 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:02:04.818076 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:02:04.819438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:02:04.820476 jq[1464]: false Feb 13 19:02:04.820566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:02:04.830709 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:02:04.836747 extend-filesystems[1465]: Found loop3 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found loop4 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found loop5 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda1 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda2 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda3 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found usr Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda4 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda6 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda7 Feb 13 19:02:04.836747 extend-filesystems[1465]: Found vda9 Feb 13 19:02:04.836747 extend-filesystems[1465]: Checking size of /dev/vda9 Feb 13 19:02:04.840094 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:02:04.840826 dbus-daemon[1463]: [system] SELinux support is enabled Feb 13 19:02:04.852737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:02:04.853193 extend-filesystems[1465]: Resized partition /dev/vda9 Feb 13 19:02:04.857376 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:02:04.858095 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:02:04.861847 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 19:02:04.865680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1385) Feb 13 19:02:04.862675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:02:04.864719 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:02:04.868844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:02:04.870664 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:02:04.873100 jq[1485]: true Feb 13 19:02:04.874891 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:02:04.877189 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:02:04.877381 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:02:04.877635 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:02:04.877804 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:02:04.880210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:02:04.880422 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:02:04.889973 jq[1488]: true Feb 13 19:02:04.890445 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:02:04.899445 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:02:04.899479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:02:04.900955 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:02:04.900979 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:02:04.921520 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:02:04.921586 tar[1487]: linux-arm64/helm Feb 13 19:02:04.921027 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:02:04.923061 systemd-logind[1481]: New seat seat0. Feb 13 19:02:04.925522 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:02:04.955220 update_engine[1483]: I20250213 19:02:04.954984 1483 main.cc:92] Flatcar Update Engine starting Feb 13 19:02:04.958678 update_engine[1483]: I20250213 19:02:04.957347 1483 update_check_scheduler.cc:74] Next update check in 9m52s Feb 13 19:02:04.957331 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:02:04.973186 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 19:02:05.033918 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:02:05.045358 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:02:05.048678 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:02:05.048678 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:02:05.048678 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:02:05.059596 extend-filesystems[1465]: Resized filesystem in /dev/vda9 Feb 13 19:02:05.060556 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:05.051925 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:02:05.052182 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:02:05.059753 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:02:05.066430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:02:05.176312 containerd[1489]: time="2025-02-13T19:02:05.176062647Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:02:05.205501 containerd[1489]: time="2025-02-13T19:02:05.205190507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.206797 containerd[1489]: time="2025-02-13T19:02:05.206757194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:05.206797 containerd[1489]: time="2025-02-13T19:02:05.206796559Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:02:05.206908 containerd[1489]: time="2025-02-13T19:02:05.206814310Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:02:05.207017 containerd[1489]: time="2025-02-13T19:02:05.206993910Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:02:05.207052 containerd[1489]: time="2025-02-13T19:02:05.207020153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207117 containerd[1489]: time="2025-02-13T19:02:05.207079886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207117 containerd[1489]: time="2025-02-13T19:02:05.207097073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207310 containerd[1489]: time="2025-02-13T19:02:05.207290921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207338 containerd[1489]: time="2025-02-13T19:02:05.207310483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:02:05.207338 containerd[1489]: time="2025-02-13T19:02:05.207323887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207338 containerd[1489]: time="2025-02-13T19:02:05.207334191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207433 containerd[1489]: time="2025-02-13T19:02:05.207402779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207627 containerd[1489]: time="2025-02-13T19:02:05.207607253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207753 containerd[1489]: time="2025-02-13T19:02:05.207734849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:05.207781 containerd[1489]: time="2025-02-13T19:02:05.207753284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:02:05.207850 containerd[1489]: time="2025-02-13T19:02:05.207834228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:02:05.207918 containerd[1489]: time="2025-02-13T19:02:05.207902655Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:02:05.214847 containerd[1489]: time="2025-02-13T19:02:05.214748008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:02:05.214847 containerd[1489]: time="2025-02-13T19:02:05.214809471Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:02:05.214847 containerd[1489]: time="2025-02-13T19:02:05.214826900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:02:05.214847 containerd[1489]: time="2025-02-13T19:02:05.214843242Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:02:05.214847 containerd[1489]: time="2025-02-13T19:02:05.214857048Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:02:05.215181 containerd[1489]: time="2025-02-13T19:02:05.215028597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:02:05.215376 containerd[1489]: time="2025-02-13T19:02:05.215356320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:02:05.215474 containerd[1489]: time="2025-02-13T19:02:05.215457269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:02:05.215511 containerd[1489]: time="2025-02-13T19:02:05.215478401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:02:05.215511 containerd[1489]: time="2025-02-13T19:02:05.215493173Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:02:05.215511 containerd[1489]: time="2025-02-13T19:02:05.215506939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215571 containerd[1489]: time="2025-02-13T19:02:05.215520665Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215571 containerd[1489]: time="2025-02-13T19:02:05.215533706Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215571 containerd[1489]: time="2025-02-13T19:02:05.215558058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215623 containerd[1489]: time="2025-02-13T19:02:05.215573796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215623 containerd[1489]: time="2025-02-13T19:02:05.215586918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215623 containerd[1489]: time="2025-02-13T19:02:05.215598872Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215623 containerd[1489]: time="2025-02-13T19:02:05.215611108Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:02:05.215700 containerd[1489]: time="2025-02-13T19:02:05.215631395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215700 containerd[1489]: time="2025-02-13T19:02:05.215656994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215735 containerd[1489]: time="2025-02-13T19:02:05.215709522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215735 containerd[1489]: time="2025-02-13T19:02:05.215722603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215775 containerd[1489]: time="2025-02-13T19:02:05.215735001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215775 containerd[1489]: time="2025-02-13T19:02:05.215749008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215775 containerd[1489]: time="2025-02-13T19:02:05.215761285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215826 containerd[1489]: time="2025-02-13T19:02:05.215775010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215826 containerd[1489]: time="2025-02-13T19:02:05.215788414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215826 containerd[1489]: time="2025-02-13T19:02:05.215802502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.215826 containerd[1489]: time="2025-02-13T19:02:05.215814255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215826209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215839049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215853016Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215902404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215922168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.216049 containerd[1489]: time="2025-02-13T19:02:05.215933478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216118149Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216137590Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216147975Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216159688Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216168262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216180095Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216189756Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:02:05.216330 containerd[1489]: time="2025-02-13T19:02:05.216199215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Feb 13 19:02:05.216598 containerd[1489]: time="2025-02-13T19:02:05.216545493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:02:05.216598 containerd[1489]: time="2025-02-13T19:02:05.216597538Z" level=info msg="Connect containerd service" Feb 13 19:02:05.216808 containerd[1489]: time="2025-02-13T19:02:05.216629940Z" level=info msg="using legacy CRI server" Feb 13 19:02:05.216808 containerd[1489]: time="2025-02-13T19:02:05.216637185Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:02:05.216930 containerd[1489]: time="2025-02-13T19:02:05.216898977Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:02:05.217581 containerd[1489]: time="2025-02-13T19:02:05.217556073Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:02:05.218441 
containerd[1489]: time="2025-02-13T19:02:05.218050314Z" level=info msg="Start subscribing containerd event" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218121397Z" level=info msg="Start recovering state" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218189462Z" level=info msg="Start event monitor" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218202422Z" level=info msg="Start snapshots syncer" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218210996Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218217879Z" level=info msg="Start streaming server" Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218079295Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:02:05.218441 containerd[1489]: time="2025-02-13T19:02:05.218424608Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:02:05.218643 containerd[1489]: time="2025-02-13T19:02:05.218482730Z" level=info msg="containerd successfully booted in 0.043542s" Feb 13 19:02:05.218563 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:02:05.340382 tar[1487]: linux-arm64/LICENSE Feb 13 19:02:05.340498 tar[1487]: linux-arm64/README.md Feb 13 19:02:05.353411 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:02:06.175703 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:02:06.194389 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:02:06.211191 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:02:06.216547 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:02:06.216795 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:02:06.219714 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:02:06.231992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:02:06.244327 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:02:06.246921 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:02:06.248272 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:02:06.381725 systemd-networkd[1410]: eth0: Gained IPv6LL Feb 13 19:02:06.384200 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:02:06.386162 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:02:06.402174 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:02:06.404975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:06.407306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:02:06.425394 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:02:06.425637 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:02:06.427721 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:02:06.430588 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:02:06.924488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:06.926353 systemd[1]: Reached target multi-user.target - Multi-User System. 
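containerd comes up in ~43ms and serves both ttrpc and gRPC on /run/containerd/containerd.sock. As an illustrative aside (not part of the captured boot), a minimal Go sketch against containerd's client library, github.com/containerd/containerd, could confirm the daemon and version reported in the "starting containerd" entry above; the k8s.io namespace is an assumption based on the CRI plugin the daemon loaded:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket taken from the "serving..." entries above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// k8s.io is where the CRI plugin keeps its state (an assumption here).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ver, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
}
```

Run against this host, the printed version should match the v1.7.23 / 9b2ad776... pair logged at startup.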
Feb 13 19:02:06.929253 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:06.929742 systemd[1]: Startup finished in 554ms (kernel) + 6.313s (initrd) + 3.951s (userspace) = 10.819s. Feb 13 19:02:07.412934 kubelet[1577]: E0213 19:02:07.412825 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:07.414945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:07.415083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:07.416955 systemd[1]: kubelet.service: Consumed 826ms CPU time, 242.2M memory peak. Feb 13 19:02:09.221627 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:02:09.222886 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Feb 13 19:02:09.289548 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:09.291449 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:09.315353 systemd-logind[1481]: New session 1 of user core. Feb 13 19:02:09.316337 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:02:09.334131 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:02:09.342709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:02:09.344795 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:02:09.350637 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:02:09.352715 systemd-logind[1481]: New session c1 of user core. Feb 13 19:02:09.461734 systemd[1595]: Queued start job for default target default.target. Feb 13 19:02:09.472781 systemd[1595]: Created slice app.slice - User Application Slice. Feb 13 19:02:09.472810 systemd[1595]: Reached target paths.target - Paths. Feb 13 19:02:09.472848 systemd[1595]: Reached target timers.target - Timers. Feb 13 19:02:09.474112 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:02:09.482904 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:02:09.482964 systemd[1595]: Reached target sockets.target - Sockets. Feb 13 19:02:09.483003 systemd[1595]: Reached target basic.target - Basic System. Feb 13 19:02:09.483030 systemd[1595]: Reached target default.target - Main User Target. Feb 13 19:02:09.483061 systemd[1595]: Startup finished in 125ms. Feb 13 19:02:09.483199 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:02:09.484435 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:02:09.550589 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). 
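The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd records the failure. A hedged Go sketch of the same pre-flight check; the suggestion that a later init/join step writes this file is my assumption, since the log never names the provisioner:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// The exact path from the kubelet error above.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		// Presumably written later by an init/join step (assumption).
		fmt.Printf("%s missing; node not initialized yet\n", path)
		os.Exit(1)
	} else if err != nil {
		fmt.Println("stat failed:", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}
```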
Feb 13 19:02:09.601107 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:09.602409 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:09.606235 systemd-logind[1481]: New session 2 of user core. Feb 13 19:02:09.613040 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:02:09.664072 sshd[1608]: Connection closed by 10.0.0.1 port 53132 Feb 13 19:02:09.664526 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:09.675925 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:53132.service: Deactivated successfully. Feb 13 19:02:09.677385 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:02:09.678730 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:02:09.679767 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:53138.service - OpenSSH per-connection server daemon (10.0.0.1:53138). Feb 13 19:02:09.680665 systemd-logind[1481]: Removed session 2. Feb 13 19:02:09.721639 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 53138 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:09.722919 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:09.726560 systemd-logind[1481]: New session 3 of user core. Feb 13 19:02:09.737039 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:02:09.784315 sshd[1616]: Connection closed by 10.0.0.1 port 53138 Feb 13 19:02:09.784718 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:09.795974 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:53138.service: Deactivated successfully. Feb 13 19:02:09.797420 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:02:09.800547 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:02:09.816141 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:53140.service - OpenSSH per-connection server daemon (10.0.0.1:53140). Feb 13 19:02:09.817338 systemd-logind[1481]: Removed session 3. Feb 13 19:02:09.853745 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 53140 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:09.854979 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:09.858631 systemd-logind[1481]: New session 4 of user core. Feb 13 19:02:09.871035 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:02:09.922965 sshd[1624]: Connection closed by 10.0.0.1 port 53140 Feb 13 19:02:09.923218 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:09.933745 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:53140.service: Deactivated successfully. Feb 13 19:02:09.935065 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:02:09.936171 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:02:09.938148 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Feb 13 19:02:09.939240 systemd-logind[1481]: Removed session 4. 
Feb 13 19:02:09.979074 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:09.980220 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:09.985067 systemd-logind[1481]: New session 5 of user core. Feb 13 19:02:09.993036 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:02:10.054695 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:02:10.055006 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:10.078610 sudo[1633]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:10.083400 sshd[1632]: Connection closed by 10.0.0.1 port 53142 Feb 13 19:02:10.084113 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:10.099224 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:53142.service: Deactivated successfully. Feb 13 19:02:10.100768 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:02:10.102157 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:02:10.115263 systemd[1]: Started sshd@5-10.0.0.40:22-10.0.0.1:53152.service - OpenSSH per-connection server daemon (10.0.0.1:53152). Feb 13 19:02:10.116171 systemd-logind[1481]: Removed session 5. Feb 13 19:02:10.153355 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 53152 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:10.154559 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:10.158771 systemd-logind[1481]: New session 6 of user core. Feb 13 19:02:10.168070 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:02:10.220295 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:02:10.220584 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:10.223680 sudo[1643]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:10.228371 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:02:10.228642 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:10.246164 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:10.268553 augenrules[1665]: No rules Feb 13 19:02:10.269709 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:10.271912 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:10.272845 sudo[1642]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:10.274093 sshd[1641]: Connection closed by 10.0.0.1 port 53152 Feb 13 19:02:10.274469 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:10.284624 systemd[1]: sshd@5-10.0.0.40:22-10.0.0.1:53152.service: Deactivated successfully. Feb 13 19:02:10.286038 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:02:10.287184 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:02:10.288215 systemd[1]: Started sshd@6-10.0.0.40:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166). Feb 13 19:02:10.289004 systemd-logind[1481]: Removed session 6. 
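Sessions 2 through 6 are short-lived public-key logins for core that run one or two sudo commands each and disconnect. Purely as a sketch of the client side of such a session (only the address and user come from the log), a minimal Go client built on golang.org/x/crypto/ssh; the key path is hypothetical and the host-key check is deliberately disabled for brevity:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not production
	}
	// Server address as logged: 10.0.0.40:22.
	client, err := ssh.Dial("tcp", "10.0.0.40:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("id")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(out)
}
```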
Feb 13 19:02:10.329641 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:10.330890 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:10.334955 systemd-logind[1481]: New session 7 of user core. Feb 13 19:02:10.343040 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:02:10.394601 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:02:10.394878 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:10.724194 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:02:10.724223 (dockerd)[1699]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:02:10.963443 dockerd[1699]: time="2025-02-13T19:02:10.963380451Z" level=info msg="Starting up" Feb 13 19:02:11.175410 dockerd[1699]: time="2025-02-13T19:02:11.175301421Z" level=info msg="Loading containers: start." Feb 13 19:02:11.305893 kernel: Initializing XFRM netlink socket Feb 13 19:02:11.365554 systemd-networkd[1410]: docker0: Link UP Feb 13 19:02:11.397135 dockerd[1699]: time="2025-02-13T19:02:11.397080762Z" level=info msg="Loading containers: done." Feb 13 19:02:11.412907 dockerd[1699]: time="2025-02-13T19:02:11.412838400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:02:11.413053 dockerd[1699]: time="2025-02-13T19:02:11.412955488Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:02:11.413166 dockerd[1699]: time="2025-02-13T19:02:11.413136476Z" level=info msg="Daemon has completed initialization" Feb 13 19:02:11.439100 dockerd[1699]: time="2025-02-13T19:02:11.438973130Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:02:11.439471 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:02:12.130704 containerd[1489]: time="2025-02-13T19:02:12.130585291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:02:12.760279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094551578.mount: Deactivated successfully. 
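dockerd initializes in under a second and exposes its API on /run/docker.sock. A hedged sketch with the Docker Go SDK (github.com/docker/docker/client) pinging that socket; version negotiation is a standard client option, not something the log itself shows:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"), // socket from "API listen on" above
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker API %s reachable\n", ping.APIVersion)
}
```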
Feb 13 19:02:14.544650 containerd[1489]: time="2025-02-13T19:02:14.544594339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:14.545175 containerd[1489]: time="2025-02-13T19:02:14.545126221Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:02:14.545879 containerd[1489]: time="2025-02-13T19:02:14.545835836Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:14.549039 containerd[1489]: time="2025-02-13T19:02:14.548985405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:14.550262 containerd[1489]: time="2025-02-13T19:02:14.550226903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.419593498s" Feb 13 19:02:14.550505 containerd[1489]: time="2025-02-13T19:02:14.550357428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:02:14.569194 containerd[1489]: time="2025-02-13T19:02:14.569142511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:02:16.498572 containerd[1489]: time="2025-02-13T19:02:16.498523319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:16.500064 containerd[1489]: time="2025-02-13T19:02:16.500015791Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:02:16.501103 containerd[1489]: time="2025-02-13T19:02:16.501055009Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:16.503979 containerd[1489]: time="2025-02-13T19:02:16.503948622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:16.505212 containerd[1489]: time="2025-02-13T19:02:16.505166298Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.935984516s" Feb 13 19:02:16.505212 containerd[1489]: time="2025-02-13T19:02:16.505196461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
19:02:16.526753 containerd[1489]: time="2025-02-13T19:02:16.526652118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:02:17.665449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:17.675033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:17.765802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:17.769582 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:17.810668 kubelet[1986]: E0213 19:02:17.810564 1986 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:17.813559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:17.813704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:17.814185 systemd[1]: kubelet.service: Consumed 132ms CPU time, 97.4M memory peak. Feb 13 19:02:18.190991 containerd[1489]: time="2025-02-13T19:02:18.190947591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.191647 containerd[1489]: time="2025-02-13T19:02:18.191594465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:02:18.192149 containerd[1489]: time="2025-02-13T19:02:18.192122929Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.195501 containerd[1489]: time="2025-02-13T19:02:18.195460454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.196960 containerd[1489]: time="2025-02-13T19:02:18.196920787Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.670225851s" Feb 13 19:02:18.196999 containerd[1489]: time="2025-02-13T19:02:18.196959190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:02:18.214979 containerd[1489]: time="2025-02-13T19:02:18.214946694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:02:19.515686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928476828.mount: Deactivated successfully. 
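Each "Pulled image" entry reports a size and a wall-clock duration, which yields a rough effective throughput. A small Go computation over the figures above, assuming the reported "size" approximates bytes transferred (it ignores unpack time and any cached layers):

```go
package main

import "fmt"

func main() {
	// Size ("size" field of the Pulled entry, bytes) and duration (seconds)
	// copied from the log entries above.
	pulls := []struct {
		ref     string
		bytes   float64
		seconds float64
	}{
		{"kube-apiserver:v1.30.10", 29862007, 2.419593498},
		{"kube-scheduler:v1.30.10", 17568681, 1.670225851},
	}
	for _, p := range pulls {
		fmt.Printf("%-26s %.1f MB/s\n", p.ref, p.bytes/p.seconds/1e6)
	}
}
```

This prints roughly 12.3 MB/s and 10.5 MB/s, a ballpark figure only.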
Feb 13 19:02:19.857605 containerd[1489]: time="2025-02-13T19:02:19.857452338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.861910 containerd[1489]: time="2025-02-13T19:02:19.861774715Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:02:19.862475 containerd[1489]: time="2025-02-13T19:02:19.862440759Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.866118 containerd[1489]: time="2025-02-13T19:02:19.866071548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.866503 containerd[1489]: time="2025-02-13T19:02:19.866470894Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.651351571s" Feb 13 19:02:19.866533 containerd[1489]: time="2025-02-13T19:02:19.866506969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:02:19.888910 containerd[1489]: time="2025-02-13T19:02:19.888851563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:02:20.561890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539293540.mount: Deactivated successfully. 
Feb 13 19:02:21.222593 containerd[1489]: time="2025-02-13T19:02:21.222538138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.224253 containerd[1489]: time="2025-02-13T19:02:21.224199728Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:02:21.224948 containerd[1489]: time="2025-02-13T19:02:21.224922103Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.231504 containerd[1489]: time="2025-02-13T19:02:21.231221164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.232323 containerd[1489]: time="2025-02-13T19:02:21.232281589Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.343377858s" Feb 13 19:02:21.232323 containerd[1489]: time="2025-02-13T19:02:21.232319057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:02:21.251318 containerd[1489]: time="2025-02-13T19:02:21.251284013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:02:21.847199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933792317.mount: Deactivated successfully. 
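Each pull above follows the same shape: ImageCreate events for the tag, the image ID, and the repo digest, then a "Pulled image" summary. A hedged Go sketch of an equivalent pull through the containerd client; the k8s.io namespace is assumed to be where the CRI plugin stores images, and pause:3.9 is the image whose pull begins here:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Assumed CRI image namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```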
Feb 13 19:02:21.852638 containerd[1489]: time="2025-02-13T19:02:21.852581425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.854225 containerd[1489]: time="2025-02-13T19:02:21.854175044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:02:21.854928 containerd[1489]: time="2025-02-13T19:02:21.854894977Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.857724 containerd[1489]: time="2025-02-13T19:02:21.857689606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:21.858568 containerd[1489]: time="2025-02-13T19:02:21.858533870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 607.21259ms" Feb 13 19:02:21.858611 containerd[1489]: time="2025-02-13T19:02:21.858569457Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:02:21.877256 containerd[1489]: time="2025-02-13T19:02:21.877202247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:02:22.442920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418249987.mount: Deactivated successfully. Feb 13 19:02:24.925514 containerd[1489]: time="2025-02-13T19:02:24.925457221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:24.926043 containerd[1489]: time="2025-02-13T19:02:24.926004092Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:02:24.926952 containerd[1489]: time="2025-02-13T19:02:24.926913744Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:24.930269 containerd[1489]: time="2025-02-13T19:02:24.930208137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:24.931698 containerd[1489]: time="2025-02-13T19:02:24.931666901Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.054421344s" Feb 13 19:02:24.931742 containerd[1489]: time="2025-02-13T19:02:24.931699837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:02:28.064152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:02:28.078457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:28.166467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:28.170593 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:28.211978 kubelet[2212]: E0213 19:02:28.211923 2212 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:28.214636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:28.214911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:28.215291 systemd[1]: kubelet.service: Consumed 128ms CPU time, 95.3M memory peak. Feb 13 19:02:30.462976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:30.463238 systemd[1]: kubelet.service: Consumed 128ms CPU time, 95.3M memory peak. Feb 13 19:02:30.475139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:30.495661 systemd[1]: Reload requested from client PID 2227 ('systemctl') (unit session-7.scope)... Feb 13 19:02:30.495677 systemd[1]: Reloading... Feb 13 19:02:30.572008 zram_generator::config[2274]: No configuration found. Feb 13 19:02:30.659348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:30.732166 systemd[1]: Reloading finished in 236 ms. Feb 13 19:02:30.779529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:30.782085 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:30.782328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:30.782392 systemd[1]: kubelet.service: Consumed 82ms CPU time, 82.4M memory peak. Feb 13 19:02:30.784104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:30.897882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:30.902919 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:30.948427 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:30.948427 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:30.948427 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:02:30.949389 kubelet[2318]: I0213 19:02:30.949333 2318 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:32.049646 kubelet[2318]: I0213 19:02:32.049598 2318 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:02:32.049646 kubelet[2318]: I0213 19:02:32.049632 2318 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:32.050037 kubelet[2318]: I0213 19:02:32.049845 2318 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:02:32.081291 kubelet[2318]: E0213 19:02:32.081241 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.081429 kubelet[2318]: I0213 19:02:32.081376 2318 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:32.091919 kubelet[2318]: I0213 19:02:32.091883 2318 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:02:32.093184 kubelet[2318]: I0213 19:02:32.093129 2318 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:32.093380 kubelet[2318]: I0213 19:02:32.093182 2318 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:02:32.093464 kubelet[2318]: I0213 19:02:32.093454 2318 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:02:32.093464 kubelet[2318]: I0213 19:02:32.093464 2318 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:02:32.093753 kubelet[2318]: I0213 19:02:32.093727 2318 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:02:32.094790 kubelet[2318]: I0213 19:02:32.094714 2318 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:02:32.094790 kubelet[2318]: I0213 19:02:32.094736 2318 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:32.095900 kubelet[2318]: I0213 19:02:32.095064 2318 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:02:32.095900 kubelet[2318]: I0213 19:02:32.095214 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:32.096426 kubelet[2318]: W0213 19:02:32.096081 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.096426 kubelet[2318]: E0213 19:02:32.096143 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.096426 kubelet[2318]: W0213 19:02:32.096351 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.096426 kubelet[2318]: E0213 19:02:32.096393 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.098280 kubelet[2318]: I0213 19:02:32.098260 2318 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:32.099887 kubelet[2318]: I0213 19:02:32.098750 2318 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:32.099887 kubelet[2318]: W0213 19:02:32.098881 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
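Every reflector list/watch fails with "connect: connection refused" against 10.0.0.40:6443 because the static apiserver pod the kubelet is supposed to launch is not running yet. A trivial Go probe that reproduces the same symptom:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint every reflector above is failing to reach.
	conn, err := net.DialTimeout("tcp", "10.0.0.40:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // "connection refused" at this point
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```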
Feb 13 19:02:32.103146 kubelet[2318]: I0213 19:02:32.103116 2318 server.go:1264] "Started kubelet" Feb 13 19:02:32.104350 kubelet[2318]: I0213 19:02:32.103385 2318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:32.104954 kubelet[2318]: I0213 19:02:32.104792 2318 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:02:32.106694 kubelet[2318]: E0213 19:02:32.106383 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9cc305cb9b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:32.103082423 +0000 UTC m=+1.196722469,LastTimestamp:2025-02-13 19:02:32.103082423 +0000 UTC m=+1.196722469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:02:32.107496 kubelet[2318]: I0213 19:02:32.107072 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:32.107496 kubelet[2318]: I0213 19:02:32.107378 2318 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:32.111706 kubelet[2318]: I0213 19:02:32.110544 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:32.111706 kubelet[2318]: I0213 19:02:32.111475 2318 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:02:32.111706 kubelet[2318]: I0213 19:02:32.111646 2318 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:32.112491 kubelet[2318]: E0213 19:02:32.112443 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="200ms" Feb 13 19:02:32.113841 kubelet[2318]: I0213 19:02:32.113801 2318 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:32.114327 kubelet[2318]: I0213 19:02:32.114281 2318 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:32.114571 kubelet[2318]: I0213 19:02:32.114403 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:32.114571 kubelet[2318]: W0213 19:02:32.114417 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.114571 kubelet[2318]: E0213 19:02:32.114469 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.114682 kubelet[2318]: E0213 19:02:32.114605 2318 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:32.115527 kubelet[2318]: I0213 19:02:32.115319 2318 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:32.129406 kubelet[2318]: I0213 19:02:32.129245 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:32.130385 kubelet[2318]: I0213 19:02:32.130360 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:32.130553 kubelet[2318]: I0213 19:02:32.130544 2318 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:02:32.130641 kubelet[2318]: I0213 19:02:32.130630 2318 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:02:32.130746 kubelet[2318]: E0213 19:02:32.130718 2318 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:32.132419 kubelet[2318]: W0213 19:02:32.132352 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.132534 kubelet[2318]: E0213 19:02:32.132427 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.132534 kubelet[2318]: I0213 19:02:32.132477 2318 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:02:32.132534 kubelet[2318]: I0213 19:02:32.132488 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:32.132534 kubelet[2318]: I0213 19:02:32.132507 2318 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:32.197365 kubelet[2318]: I0213 19:02:32.197334 2318 policy_none.go:49] "None policy: Start" Feb 13 19:02:32.198189 kubelet[2318]: I0213 19:02:32.198157 2318 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:02:32.198189 kubelet[2318]: I0213 19:02:32.198186 2318 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:32.203322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:02:32.213044 kubelet[2318]: I0213 19:02:32.212949 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:32.213432 kubelet[2318]: E0213 19:02:32.213401 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Feb 13 19:02:32.218782 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:02:32.221719 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
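With the None policies in place, the kubelet's cgroup slices (kubepods.slice and its burstable/besteffort children) now exist under systemd, consistent with the "CgroupDriver":"systemd" setting logged earlier. A hedged Go sketch that lists them; the /sys/fs/cgroup path assumes a unified (cgroup v2) hierarchy, which the log does not state explicitly:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumes a unified (cgroup v2) hierarchy, where each systemd slice is a
	// directory; kubepods.slice was created by the kubelet just above.
	entries, err := os.ReadDir("/sys/fs/cgroup/kubepods.slice")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```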
Feb 13 19:02:32.231840 kubelet[2318]: E0213 19:02:32.231800 2318 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:02:32.232811 kubelet[2318]: I0213 19:02:32.232769 2318 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:32.233062 kubelet[2318]: I0213 19:02:32.233015 2318 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:32.233193 kubelet[2318]: I0213 19:02:32.233138 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:32.234372 kubelet[2318]: E0213 19:02:32.234330 2318 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:02:32.313111 kubelet[2318]: E0213 19:02:32.312959 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="400ms" Feb 13 19:02:32.414677 kubelet[2318]: I0213 19:02:32.414640 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:32.415042 kubelet[2318]: E0213 19:02:32.414990 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Feb 13 19:02:32.432363 kubelet[2318]: I0213 19:02:32.432292 2318 topology_manager.go:215] "Topology Admit Handler" podUID="34f125a2ede4639301c884fcc298a839" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:02:32.433581 kubelet[2318]: I0213 19:02:32.433533 2318 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:02:32.434491 kubelet[2318]: I0213 19:02:32.434460 2318 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:02:32.440855 systemd[1]: Created slice kubepods-burstable-pod34f125a2ede4639301c884fcc298a839.slice - libcontainer container kubepods-burstable-pod34f125a2ede4639301c884fcc298a839.slice. Feb 13 19:02:32.465793 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:02:32.478513 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
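Note the lease controller's retry interval: 200ms on the first failure, 400ms here, and 800ms just below, a doubling backoff. A minimal Go sketch of that schedule; the cap is an assumption, not a value taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The log shows the retry interval doubling: 200ms, 400ms, 800ms.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // cap value is an assumption, not from the log
	for i := 0; i < 6; i++ {
		fmt.Println(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```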
Feb 13 19:02:32.516510 kubelet[2318]: I0213 19:02:32.516461 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:32.516653 kubelet[2318]: I0213 19:02:32.516521 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:02:32.516653 kubelet[2318]: I0213 19:02:32.516544 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:32.516653 kubelet[2318]: I0213 19:02:32.516578 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:32.516653 kubelet[2318]: I0213 19:02:32.516594 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:32.516653 kubelet[2318]: I0213 19:02:32.516610 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:32.516764 kubelet[2318]: I0213 19:02:32.516627 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:32.516764 kubelet[2318]: I0213 19:02:32.516659 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:32.516764 kubelet[2318]: I0213 19:02:32.516690 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:32.713665 kubelet[2318]: E0213 19:02:32.713495 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="800ms" Feb 13 19:02:32.763182 kubelet[2318]: E0213 19:02:32.763122 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:32.763930 containerd[1489]: time="2025-02-13T19:02:32.763887022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34f125a2ede4639301c884fcc298a839,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:32.768873 kubelet[2318]: E0213 19:02:32.768817 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:32.769367 containerd[1489]: time="2025-02-13T19:02:32.769320109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:32.781138 kubelet[2318]: E0213 19:02:32.780850 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:32.781938 containerd[1489]: time="2025-02-13T19:02:32.781883290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:32.816438 kubelet[2318]: I0213 19:02:32.816410 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:32.817018 kubelet[2318]: E0213 19:02:32.816958 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Feb 13 19:02:32.912786 kubelet[2318]: W0213 19:02:32.912700 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:32.912786 kubelet[2318]: E0213 19:02:32.912764 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.110825 kubelet[2318]: W0213 19:02:33.110675 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.110825 kubelet[2318]: E0213 19:02:33.110747 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.172652 kubelet[2318]: W0213 19:02:33.172608 2318 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.172652 kubelet[2318]: E0213 19:02:33.172652 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.256828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310777639.mount: Deactivated successfully. Feb 13 19:02:33.263498 containerd[1489]: time="2025-02-13T19:02:33.263445788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:33.264755 containerd[1489]: time="2025-02-13T19:02:33.264246347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:02:33.268197 containerd[1489]: time="2025-02-13T19:02:33.268101162Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:33.269275 containerd[1489]: time="2025-02-13T19:02:33.269240092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:33.269984 containerd[1489]: time="2025-02-13T19:02:33.269916913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:33.270876 containerd[1489]: time="2025-02-13T19:02:33.270785643Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:33.271670 containerd[1489]: time="2025-02-13T19:02:33.271419497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:33.273325 containerd[1489]: time="2025-02-13T19:02:33.273260052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:33.274275 containerd[1489]: time="2025-02-13T19:02:33.274244838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.276122ms" Feb 13 19:02:33.276948 containerd[1489]: time="2025-02-13T19:02:33.276804380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.841197ms" Feb 13 19:02:33.279882 containerd[1489]: 
time="2025-02-13T19:02:33.279830712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.427949ms" Feb 13 19:02:33.415205 containerd[1489]: time="2025-02-13T19:02:33.410105583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:33.415205 containerd[1489]: time="2025-02-13T19:02:33.412236741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:33.415205 containerd[1489]: time="2025-02-13T19:02:33.412263265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.415205 containerd[1489]: time="2025-02-13T19:02:33.412914882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:33.415969 containerd[1489]: time="2025-02-13T19:02:33.413041861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:33.415969 containerd[1489]: time="2025-02-13T19:02:33.413063424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.416188 containerd[1489]: time="2025-02-13T19:02:33.415960896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.416773 containerd[1489]: time="2025-02-13T19:02:33.416500937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.416773 containerd[1489]: time="2025-02-13T19:02:33.416444728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:33.417337 containerd[1489]: time="2025-02-13T19:02:33.417285254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:33.418095 containerd[1489]: time="2025-02-13T19:02:33.417904586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.418095 containerd[1489]: time="2025-02-13T19:02:33.418031645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:33.446131 systemd[1]: Started cri-containerd-99e81f30fffc6da89fe88d79dcc09d1c074769bd0e507eb2d87643a73b03a260.scope - libcontainer container 99e81f30fffc6da89fe88d79dcc09d1c074769bd0e507eb2d87643a73b03a260. Feb 13 19:02:33.447685 systemd[1]: Started cri-containerd-9a7d2ba1bcf07d921e2a43af383bbf0223820c8d86965d086e036a06ca36ee7d.scope - libcontainer container 9a7d2ba1bcf07d921e2a43af383bbf0223820c8d86965d086e036a06ca36ee7d. Feb 13 19:02:33.449257 systemd[1]: Started cri-containerd-dd746086a86d8cbb849242f1288800c9b1cf7ac64b4db51775857d465855f534.scope - libcontainer container dd746086a86d8cbb849242f1288800c9b1cf7ac64b4db51775857d465855f534. 
Feb 13 19:02:33.484044 containerd[1489]: time="2025-02-13T19:02:33.483975361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34f125a2ede4639301c884fcc298a839,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a7d2ba1bcf07d921e2a43af383bbf0223820c8d86965d086e036a06ca36ee7d\"" Feb 13 19:02:33.485445 kubelet[2318]: E0213 19:02:33.485413 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:33.485624 containerd[1489]: time="2025-02-13T19:02:33.485566198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e81f30fffc6da89fe88d79dcc09d1c074769bd0e507eb2d87643a73b03a260\"" Feb 13 19:02:33.486850 kubelet[2318]: E0213 19:02:33.486825 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:33.490420 containerd[1489]: time="2025-02-13T19:02:33.490377196Z" level=info msg="CreateContainer within sandbox \"9a7d2ba1bcf07d921e2a43af383bbf0223820c8d86965d086e036a06ca36ee7d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:02:33.491366 containerd[1489]: time="2025-02-13T19:02:33.491330258Z" level=info msg="CreateContainer within sandbox \"99e81f30fffc6da89fe88d79dcc09d1c074769bd0e507eb2d87643a73b03a260\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:02:33.505378 containerd[1489]: time="2025-02-13T19:02:33.505329586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd746086a86d8cbb849242f1288800c9b1cf7ac64b4db51775857d465855f534\"" Feb 13 19:02:33.506244 kubelet[2318]: E0213 19:02:33.506218 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:33.508549 containerd[1489]: time="2025-02-13T19:02:33.508499619Z" level=info msg="CreateContainer within sandbox \"dd746086a86d8cbb849242f1288800c9b1cf7ac64b4db51775857d465855f534\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:02:33.514180 kubelet[2318]: E0213 19:02:33.514128 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="1.6s" Feb 13 19:02:33.517141 containerd[1489]: time="2025-02-13T19:02:33.517095021Z" level=info msg="CreateContainer within sandbox \"99e81f30fffc6da89fe88d79dcc09d1c074769bd0e507eb2d87643a73b03a260\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd869304ddb79521b0f93656d35348bac00ab2f711ba622eec0d95a245ebfa79\"" Feb 13 19:02:33.517925 containerd[1489]: time="2025-02-13T19:02:33.517896180Z" level=info msg="StartContainer for \"cd869304ddb79521b0f93656d35348bac00ab2f711ba622eec0d95a245ebfa79\"" Feb 13 19:02:33.520535 containerd[1489]: time="2025-02-13T19:02:33.520359388Z" level=info msg="CreateContainer within sandbox \"9a7d2ba1bcf07d921e2a43af383bbf0223820c8d86965d086e036a06ca36ee7d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4612f8e257229406f0f88f45ccfd2b53f2b3817d429bd9a74768eca50acd087\"" Feb 13 19:02:33.521018 containerd[1489]: time="2025-02-13T19:02:33.520945795Z" level=info msg="StartContainer for \"d4612f8e257229406f0f88f45ccfd2b53f2b3817d429bd9a74768eca50acd087\"" Feb 13 19:02:33.529724 containerd[1489]: time="2025-02-13T19:02:33.529682418Z" level=info msg="CreateContainer within sandbox \"dd746086a86d8cbb849242f1288800c9b1cf7ac64b4db51775857d465855f534\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c50fcef22ea3ef6588c2f787915066b2d1186ffbf821ab4bd589af42cbb9a083\"" Feb 13 19:02:33.530888 containerd[1489]: time="2025-02-13T19:02:33.530342637Z" level=info msg="StartContainer for \"c50fcef22ea3ef6588c2f787915066b2d1186ffbf821ab4bd589af42cbb9a083\"" Feb 13 19:02:33.547117 systemd[1]: Started cri-containerd-cd869304ddb79521b0f93656d35348bac00ab2f711ba622eec0d95a245ebfa79.scope - libcontainer container cd869304ddb79521b0f93656d35348bac00ab2f711ba622eec0d95a245ebfa79. Feb 13 19:02:33.551391 systemd[1]: Started cri-containerd-d4612f8e257229406f0f88f45ccfd2b53f2b3817d429bd9a74768eca50acd087.scope - libcontainer container d4612f8e257229406f0f88f45ccfd2b53f2b3817d429bd9a74768eca50acd087. Feb 13 19:02:33.554411 systemd[1]: Started cri-containerd-c50fcef22ea3ef6588c2f787915066b2d1186ffbf821ab4bd589af42cbb9a083.scope - libcontainer container c50fcef22ea3ef6588c2f787915066b2d1186ffbf821ab4bd589af42cbb9a083. Feb 13 19:02:33.593577 containerd[1489]: time="2025-02-13T19:02:33.593525941Z" level=info msg="StartContainer for \"cd869304ddb79521b0f93656d35348bac00ab2f711ba622eec0d95a245ebfa79\" returns successfully" Feb 13 19:02:33.607816 containerd[1489]: time="2025-02-13T19:02:33.606566446Z" level=info msg="StartContainer for \"d4612f8e257229406f0f88f45ccfd2b53f2b3817d429bd9a74768eca50acd087\" returns successfully" Feb 13 19:02:33.607816 containerd[1489]: time="2025-02-13T19:02:33.606568446Z" level=info msg="StartContainer for \"c50fcef22ea3ef6588c2f787915066b2d1186ffbf821ab4bd589af42cbb9a083\" returns successfully" Feb 13 19:02:33.622583 kubelet[2318]: I0213 19:02:33.622531 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:33.623270 kubelet[2318]: E0213 19:02:33.623245 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Feb 13 19:02:33.692642 kubelet[2318]: W0213 19:02:33.692476 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:33.692642 kubelet[2318]: E0213 19:02:33.692548 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Feb 13 19:02:34.142940 kubelet[2318]: E0213 19:02:34.141186 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:34.143906 kubelet[2318]: E0213 19:02:34.143878 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:34.146145 kubelet[2318]: E0213 19:02:34.146035 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:35.147816 kubelet[2318]: E0213 19:02:35.147704 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:35.147816 kubelet[2318]: E0213 19:02:35.147729 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:35.148250 kubelet[2318]: E0213 19:02:35.147941 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:35.224603 kubelet[2318]: I0213 19:02:35.224562 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:35.542009 kubelet[2318]: E0213 19:02:35.541854 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:02:35.580275 kubelet[2318]: I0213 19:02:35.580228 2318 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:02:36.098852 kubelet[2318]: I0213 19:02:36.098744 2318 apiserver.go:52] "Watching apiserver" Feb 13 19:02:36.112207 kubelet[2318]: I0213 19:02:36.112179 2318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:02:36.734345 kubelet[2318]: E0213 19:02:36.734304 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:37.149362 kubelet[2318]: E0213 19:02:37.149181 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:37.865169 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Feb 13 19:02:37.865185 systemd[1]: Reloading... Feb 13 19:02:37.943121 zram_generator::config[2648]: No configuration found. Feb 13 19:02:38.025805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:38.110052 systemd[1]: Reloading finished in 244 ms. Feb 13 19:02:38.129901 kubelet[2318]: I0213 19:02:38.129436 2318 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:38.129673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:38.137322 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:38.137543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:38.137596 systemd[1]: kubelet.service: Consumed 1.594s CPU time, 116.4M memory peak. Feb 13 19:02:38.148288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:38.246409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:02:38.251376 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:38.290984 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:38.290984 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:38.290984 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:38.291352 kubelet[2687]: I0213 19:02:38.291031 2687 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:38.295576 kubelet[2687]: I0213 19:02:38.295536 2687 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:02:38.295576 kubelet[2687]: I0213 19:02:38.295565 2687 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:38.295932 kubelet[2687]: I0213 19:02:38.295768 2687 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:02:38.297176 kubelet[2687]: I0213 19:02:38.297136 2687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:02:38.301193 kubelet[2687]: I0213 19:02:38.301153 2687 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:38.307419 kubelet[2687]: I0213 19:02:38.306423 2687 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:02:38.307419 kubelet[2687]: I0213 19:02:38.306629 2687 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:38.307419 kubelet[2687]: I0213 19:02:38.306665 2687 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:02:38.307419 kubelet[2687]: I0213 19:02:38.306845 2687 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:02:38.307664 kubelet[2687]: I0213 19:02:38.306855 2687 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:02:38.307664 kubelet[2687]: I0213 19:02:38.306928 2687 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:38.307664 kubelet[2687]: I0213 19:02:38.307048 2687 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:02:38.307664 kubelet[2687]: I0213 19:02:38.307066 2687 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:38.307664 kubelet[2687]: I0213 19:02:38.307510 2687 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:02:38.310543 kubelet[2687]: I0213 19:02:38.308202 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:38.311492 kubelet[2687]: I0213 19:02:38.311472 2687 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:38.311985 kubelet[2687]: I0213 19:02:38.311969 2687 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:38.312465 kubelet[2687]: I0213 19:02:38.312451 2687 server.go:1264] "Started kubelet" Feb 13 19:02:38.313145 kubelet[2687]: I0213 19:02:38.313109 2687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:38.314009 kubelet[2687]: I0213 19:02:38.313979 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:38.314356 kubelet[2687]: I0213 19:02:38.314341 2687 server.go:455] "Adding debug handlers to 
kubelet server" Feb 13 19:02:38.314999 kubelet[2687]: I0213 19:02:38.314931 2687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:38.315289 kubelet[2687]: I0213 19:02:38.315162 2687 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:38.319274 kubelet[2687]: I0213 19:02:38.319250 2687 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:02:38.319834 kubelet[2687]: I0213 19:02:38.319816 2687 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:38.320267 kubelet[2687]: I0213 19:02:38.320253 2687 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:38.336700 kubelet[2687]: I0213 19:02:38.336647 2687 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:38.336804 kubelet[2687]: I0213 19:02:38.336770 2687 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:38.342569 kubelet[2687]: E0213 19:02:38.342538 2687 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:38.345181 kubelet[2687]: I0213 19:02:38.345142 2687 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:38.346627 kubelet[2687]: I0213 19:02:38.346572 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:38.348215 kubelet[2687]: I0213 19:02:38.347828 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:38.348215 kubelet[2687]: I0213 19:02:38.347886 2687 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:02:38.348215 kubelet[2687]: I0213 19:02:38.347910 2687 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:02:38.348215 kubelet[2687]: E0213 19:02:38.347958 2687 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:38.378432 kubelet[2687]: I0213 19:02:38.378405 2687 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:02:38.378432 kubelet[2687]: I0213 19:02:38.378424 2687 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:38.378576 kubelet[2687]: I0213 19:02:38.378446 2687 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:38.378656 kubelet[2687]: I0213 19:02:38.378606 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:02:38.378656 kubelet[2687]: I0213 19:02:38.378621 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:02:38.378656 kubelet[2687]: I0213 19:02:38.378641 2687 policy_none.go:49] "None policy: Start" Feb 13 19:02:38.379459 kubelet[2687]: I0213 19:02:38.379394 2687 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:02:38.379459 kubelet[2687]: I0213 19:02:38.379422 2687 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:38.379609 kubelet[2687]: I0213 19:02:38.379595 2687 state_mem.go:75] "Updated machine memory state" Feb 13 19:02:38.383878 kubelet[2687]: I0213 19:02:38.383763 2687 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:38.384196 
kubelet[2687]: I0213 19:02:38.383997 2687 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:38.384196 kubelet[2687]: I0213 19:02:38.384103 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:38.423608 kubelet[2687]: I0213 19:02:38.423576 2687 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:02:38.431261 kubelet[2687]: I0213 19:02:38.431217 2687 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:02:38.431395 kubelet[2687]: I0213 19:02:38.431314 2687 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:02:38.448595 kubelet[2687]: I0213 19:02:38.448557 2687 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:02:38.448916 kubelet[2687]: I0213 19:02:38.448678 2687 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:02:38.448916 kubelet[2687]: I0213 19:02:38.448720 2687 topology_manager.go:215] "Topology Admit Handler" podUID="34f125a2ede4639301c884fcc298a839" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:02:38.454722 kubelet[2687]: E0213 19:02:38.454684 2687 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:38.621588 kubelet[2687]: I0213 19:02:38.621538 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:38.621588 kubelet[2687]: I0213 19:02:38.621580 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:38.621588 kubelet[2687]: I0213 19:02:38.621603 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:38.622562 kubelet[2687]: I0213 19:02:38.621621 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:38.622562 kubelet[2687]: I0213 19:02:38.621636 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:38.622562 kubelet[2687]: I0213 19:02:38.621653 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:38.622562 kubelet[2687]: I0213 19:02:38.621670 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:38.622562 kubelet[2687]: I0213 19:02:38.621692 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:02:38.622689 kubelet[2687]: I0213 19:02:38.621706 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34f125a2ede4639301c884fcc298a839-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34f125a2ede4639301c884fcc298a839\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:38.756282 kubelet[2687]: E0213 19:02:38.755986 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:38.756282 kubelet[2687]: E0213 19:02:38.756071 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:38.756282 kubelet[2687]: E0213 19:02:38.756209 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:38.869033 sudo[2723]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:02:38.869329 sudo[2723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:02:39.311345 kubelet[2687]: I0213 19:02:39.311291 2687 apiserver.go:52] "Watching apiserver" Feb 13 19:02:39.315022 sudo[2723]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:39.321558 kubelet[2687]: I0213 19:02:39.321308 2687 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:02:39.366835 kubelet[2687]: E0213 19:02:39.363658 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:39.371384 kubelet[2687]: E0213 19:02:39.371315 2687 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:02:39.371820 kubelet[2687]: E0213 19:02:39.371756 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:39.373156 kubelet[2687]: E0213 19:02:39.372798 2687 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:02:39.373768 kubelet[2687]: E0213 19:02:39.373741 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:39.398739 kubelet[2687]: I0213 19:02:39.398633 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.398593728 podStartE2EDuration="1.398593728s" podCreationTimestamp="2025-02-13 19:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:39.38755618 +0000 UTC m=+1.133079877" watchObservedRunningTime="2025-02-13 19:02:39.398593728 +0000 UTC m=+1.144117385" Feb 13 19:02:39.406563 kubelet[2687]: I0213 19:02:39.406506 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.406485949 podStartE2EDuration="3.406485949s" podCreationTimestamp="2025-02-13 19:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:39.39880666 +0000 UTC m=+1.144330357" watchObservedRunningTime="2025-02-13 19:02:39.406485949 +0000 UTC m=+1.152009686" Feb 13 19:02:39.418071 kubelet[2687]: I0213 19:02:39.417980 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4179438389999999 podStartE2EDuration="1.417943839s" podCreationTimestamp="2025-02-13 19:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:39.40669984 +0000 UTC m=+1.152223497" watchObservedRunningTime="2025-02-13 19:02:39.417943839 +0000 UTC m=+1.163467536" Feb 13 19:02:40.365466 kubelet[2687]: E0213 19:02:40.365336 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:40.365466 kubelet[2687]: E0213 19:02:40.365389 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:40.713220 sudo[1677]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:40.715003 sshd[1676]: Connection closed by 10.0.0.1 port 53166 Feb 13 19:02:40.718267 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:40.721336 systemd[1]: sshd@6-10.0.0.40:22-10.0.0.1:53166.service: Deactivated successfully. Feb 13 19:02:40.723311 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:02:40.723506 systemd[1]: session-7.scope: Consumed 7.737s CPU time, 289M memory peak. Feb 13 19:02:40.725370 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:02:40.726544 systemd-logind[1481]: Removed session 7. 
Feb 13 19:02:41.629710 kubelet[2687]: E0213 19:02:41.629666 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:45.325988 kubelet[2687]: E0213 19:02:45.325930 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:45.371891 kubelet[2687]: E0213 19:02:45.371805 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:47.802274 kubelet[2687]: E0213 19:02:47.802222 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:48.377241 kubelet[2687]: E0213 19:02:48.376848 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:50.118095 update_engine[1483]: I20250213 19:02:50.118026 1483 update_attempter.cc:509] Updating boot flags... Feb 13 19:02:50.154895 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2774) Feb 13 19:02:50.200462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2778) Feb 13 19:02:51.642889 kubelet[2687]: E0213 19:02:51.641937 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:52.383364 kubelet[2687]: E0213 19:02:52.383320 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:52.571849 kubelet[2687]: I0213 19:02:52.571813 2687 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:02:52.577433 containerd[1489]: time="2025-02-13T19:02:52.577393639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:02:52.577838 kubelet[2687]: I0213 19:02:52.577683 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:02:53.041585 kubelet[2687]: I0213 19:02:53.041511 2687 topology_manager.go:215] "Topology Admit Handler" podUID="ccfd7578-a5e7-4be2-9f93-67bdc9ff712b" podNamespace="kube-system" podName="kube-proxy-bqxj9" Feb 13 19:02:53.059969 kubelet[2687]: I0213 19:02:53.059910 2687 topology_manager.go:215] "Topology Admit Handler" podUID="709d5515-9f41-4e18-98a3-131705548c6b" podNamespace="kube-system" podName="cilium-4h9mh" Feb 13 19:02:53.071312 systemd[1]: Created slice kubepods-besteffort-podccfd7578_a5e7_4be2_9f93_67bdc9ff712b.slice - libcontainer container kubepods-besteffort-podccfd7578_a5e7_4be2_9f93_67bdc9ff712b.slice. Feb 13 19:02:53.090251 systemd[1]: Created slice kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice - libcontainer container kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice. 
Feb 13 19:02:53.221996 kubelet[2687]: I0213 19:02:53.221950 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445jb\" (UniqueName: \"kubernetes.io/projected/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-kube-api-access-445jb\") pod \"kube-proxy-bqxj9\" (UID: \"ccfd7578-a5e7-4be2-9f93-67bdc9ff712b\") " pod="kube-system/kube-proxy-bqxj9" Feb 13 19:02:53.221996 kubelet[2687]: I0213 19:02:53.221997 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cni-path\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222017 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-hubble-tls\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222037 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-etc-cni-netd\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222054 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-net\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222069 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-bpf-maps\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222083 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-lib-modules\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222153 kubelet[2687]: I0213 19:02:53.222099 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmpk\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222138 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-lib-modules\") pod \"kube-proxy-bqxj9\" (UID: \"ccfd7578-a5e7-4be2-9f93-67bdc9ff712b\") " pod="kube-system/kube-proxy-bqxj9" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222181 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/709d5515-9f41-4e18-98a3-131705548c6b-clustermesh-secrets\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222201 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-kube-proxy\") pod \"kube-proxy-bqxj9\" (UID: \"ccfd7578-a5e7-4be2-9f93-67bdc9ff712b\") " pod="kube-system/kube-proxy-bqxj9" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222228 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-xtables-lock\") pod \"kube-proxy-bqxj9\" (UID: \"ccfd7578-a5e7-4be2-9f93-67bdc9ff712b\") " pod="kube-system/kube-proxy-bqxj9" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222247 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-run\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222289 kubelet[2687]: I0213 19:02:53.222264 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-hostproc\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222413 kubelet[2687]: I0213 19:02:53.222281 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-cgroup\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222413 kubelet[2687]: I0213 19:02:53.222298 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d5515-9f41-4e18-98a3-131705548c6b-cilium-config-path\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222413 kubelet[2687]: I0213 19:02:53.222313 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-kernel\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.222413 kubelet[2687]: I0213 19:02:53.222327 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-xtables-lock\") pod \"cilium-4h9mh\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") " pod="kube-system/cilium-4h9mh" Feb 13 19:02:53.336466 kubelet[2687]: E0213 19:02:53.333567 2687 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:53.336466 kubelet[2687]: E0213 19:02:53.333602 2687 projected.go:200] Error preparing data for projected volume kube-api-access-gpmpk for pod 
kube-system/cilium-4h9mh: configmap "kube-root-ca.crt" not found Feb 13 19:02:53.336466 kubelet[2687]: E0213 19:02:53.333648 2687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk podName:709d5515-9f41-4e18-98a3-131705548c6b nodeName:}" failed. No retries permitted until 2025-02-13 19:02:53.833629763 +0000 UTC m=+15.579153500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gpmpk" (UniqueName: "kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk") pod "cilium-4h9mh" (UID: "709d5515-9f41-4e18-98a3-131705548c6b") : configmap "kube-root-ca.crt" not found Feb 13 19:02:53.337380 kubelet[2687]: E0213 19:02:53.337341 2687 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:53.337380 kubelet[2687]: E0213 19:02:53.337365 2687 projected.go:200] Error preparing data for projected volume kube-api-access-445jb for pod kube-system/kube-proxy-bqxj9: configmap "kube-root-ca.crt" not found Feb 13 19:02:53.337462 kubelet[2687]: E0213 19:02:53.337404 2687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-kube-api-access-445jb podName:ccfd7578-a5e7-4be2-9f93-67bdc9ff712b nodeName:}" failed. No retries permitted until 2025-02-13 19:02:53.837388139 +0000 UTC m=+15.582911796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-445jb" (UniqueName: "kubernetes.io/projected/ccfd7578-a5e7-4be2-9f93-67bdc9ff712b-kube-api-access-445jb") pod "kube-proxy-bqxj9" (UID: "ccfd7578-a5e7-4be2-9f93-67bdc9ff712b") : configmap "kube-root-ca.crt" not found Feb 13 19:02:53.646043 kubelet[2687]: I0213 19:02:53.645336 2687 topology_manager.go:215] "Topology Admit Handler" podUID="e45be814-07d1-456d-a1af-eeb5cc0d14a8" podNamespace="kube-system" podName="cilium-operator-599987898-rmjnc" Feb 13 19:02:53.662776 systemd[1]: Created slice kubepods-besteffort-pode45be814_07d1_456d_a1af_eeb5cc0d14a8.slice - libcontainer container kubepods-besteffort-pode45be814_07d1_456d_a1af_eeb5cc0d14a8.slice. 
Feb 13 19:02:53.829071 kubelet[2687]: I0213 19:02:53.828989 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e45be814-07d1-456d-a1af-eeb5cc0d14a8-cilium-config-path\") pod \"cilium-operator-599987898-rmjnc\" (UID: \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\") " pod="kube-system/cilium-operator-599987898-rmjnc" Feb 13 19:02:53.829071 kubelet[2687]: I0213 19:02:53.829029 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7xk\" (UniqueName: \"kubernetes.io/projected/e45be814-07d1-456d-a1af-eeb5cc0d14a8-kube-api-access-kv7xk\") pod \"cilium-operator-599987898-rmjnc\" (UID: \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\") " pod="kube-system/cilium-operator-599987898-rmjnc" Feb 13 19:02:53.966388 kubelet[2687]: E0213 19:02:53.966353 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:53.973927 containerd[1489]: time="2025-02-13T19:02:53.973561898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rmjnc,Uid:e45be814-07d1-456d-a1af-eeb5cc0d14a8,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:53.987154 kubelet[2687]: E0213 19:02:53.987115 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:53.989604 containerd[1489]: time="2025-02-13T19:02:53.989203457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqxj9,Uid:ccfd7578-a5e7-4be2-9f93-67bdc9ff712b,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:53.993551 kubelet[2687]: E0213 19:02:53.993527 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:53.995101 containerd[1489]: time="2025-02-13T19:02:53.993970459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h9mh,Uid:709d5515-9f41-4e18-98a3-131705548c6b,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:53.998571 containerd[1489]: time="2025-02-13T19:02:53.998441293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:53.999331 containerd[1489]: time="2025-02-13T19:02:53.999076909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:53.999331 containerd[1489]: time="2025-02-13T19:02:53.999114590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:53.999331 containerd[1489]: time="2025-02-13T19:02:53.999232273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:54.011839 containerd[1489]: time="2025-02-13T19:02:54.011696299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:54.011839 containerd[1489]: time="2025-02-13T19:02:54.011766300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:54.011839 containerd[1489]: time="2025-02-13T19:02:54.011785541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:54.012101 containerd[1489]: time="2025-02-13T19:02:54.011912584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:54.022051 systemd[1]: Started cri-containerd-a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a.scope - libcontainer container a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a. Feb 13 19:02:54.028673 containerd[1489]: time="2025-02-13T19:02:54.028328624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:54.028673 containerd[1489]: time="2025-02-13T19:02:54.028570949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:54.028673 containerd[1489]: time="2025-02-13T19:02:54.028590870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:54.029131 containerd[1489]: time="2025-02-13T19:02:54.028675592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:54.029310 systemd[1]: Started cri-containerd-6fc6ae86d268053cbbbf498ab2028a5f6cf5141eba68bd435fed75863142729e.scope - libcontainer container 6fc6ae86d268053cbbbf498ab2028a5f6cf5141eba68bd435fed75863142729e. Feb 13 19:02:54.055185 systemd[1]: Started cri-containerd-0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f.scope - libcontainer container 0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f. 
Feb 13 19:02:54.070614 containerd[1489]: time="2025-02-13T19:02:54.070566972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqxj9,Uid:ccfd7578-a5e7-4be2-9f93-67bdc9ff712b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fc6ae86d268053cbbbf498ab2028a5f6cf5141eba68bd435fed75863142729e\"" Feb 13 19:02:54.073673 containerd[1489]: time="2025-02-13T19:02:54.073640167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rmjnc,Uid:e45be814-07d1-456d-a1af-eeb5cc0d14a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\"" Feb 13 19:02:54.074407 kubelet[2687]: E0213 19:02:54.074383 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:54.074881 kubelet[2687]: E0213 19:02:54.074808 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:54.078562 containerd[1489]: time="2025-02-13T19:02:54.078524486Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:02:54.080794 containerd[1489]: time="2025-02-13T19:02:54.080757140Z" level=info msg="CreateContainer within sandbox \"6fc6ae86d268053cbbbf498ab2028a5f6cf5141eba68bd435fed75863142729e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:02:54.089981 containerd[1489]: time="2025-02-13T19:02:54.089937243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h9mh,Uid:709d5515-9f41-4e18-98a3-131705548c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\"" Feb 13 19:02:54.090947 kubelet[2687]: E0213 19:02:54.090845 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:54.104866 containerd[1489]: time="2025-02-13T19:02:54.104807085Z" level=info msg="CreateContainer within sandbox \"6fc6ae86d268053cbbbf498ab2028a5f6cf5141eba68bd435fed75863142729e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c68d1211345741025b0349473a24b4d1373d10a86433a4b819675b238153fd43\"" Feb 13 19:02:54.107128 containerd[1489]: time="2025-02-13T19:02:54.106918657Z" level=info msg="StartContainer for \"c68d1211345741025b0349473a24b4d1373d10a86433a4b819675b238153fd43\"" Feb 13 19:02:54.132036 systemd[1]: Started cri-containerd-c68d1211345741025b0349473a24b4d1373d10a86433a4b819675b238153fd43.scope - libcontainer container c68d1211345741025b0349473a24b4d1373d10a86433a4b819675b238153fd43. Feb 13 19:02:54.174458 containerd[1489]: time="2025-02-13T19:02:54.174410980Z" level=info msg="StartContainer for \"c68d1211345741025b0349473a24b4d1373d10a86433a4b819675b238153fd43\" returns successfully" Feb 13 19:02:54.391027 kubelet[2687]: E0213 19:02:54.389957 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:02:57.068001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548705209.mount: Deactivated successfully. 
Feb 13 19:02:58.373075 kubelet[2687]: I0213 19:02:58.372189 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqxj9" podStartSLOduration=5.372171559 podStartE2EDuration="5.372171559s" podCreationTimestamp="2025-02-13 19:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:54.399597262 +0000 UTC m=+16.145120959" watchObservedRunningTime="2025-02-13 19:02:58.372171559 +0000 UTC m=+20.117695256" Feb 13 19:02:58.676069 containerd[1489]: time="2025-02-13T19:02:58.675957690Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:58.676610 containerd[1489]: time="2025-02-13T19:02:58.676561263Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:02:58.677502 containerd[1489]: time="2025-02-13T19:02:58.677457961Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:58.679618 containerd[1489]: time="2025-02-13T19:02:58.679586164Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.601018838s" Feb 13 19:02:58.679698 containerd[1489]: time="2025-02-13T19:02:58.679621405Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:02:58.681691 containerd[1489]: time="2025-02-13T19:02:58.681667126Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:02:58.684398 containerd[1489]: time="2025-02-13T19:02:58.684277099Z" level=info msg="CreateContainer within sandbox \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:02:58.696531 containerd[1489]: time="2025-02-13T19:02:58.696483147Z" level=info msg="CreateContainer within sandbox \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\"" Feb 13 19:02:58.697246 containerd[1489]: time="2025-02-13T19:02:58.697187202Z" level=info msg="StartContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\"" Feb 13 19:02:58.715876 systemd[1]: run-containerd-runc-k8s.io-4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9-runc.gszDWN.mount: Deactivated successfully. Feb 13 19:02:58.727030 systemd[1]: Started cri-containerd-4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9.scope - libcontainer container 4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9. 
Feb 13 19:02:58.747959 containerd[1489]: time="2025-02-13T19:02:58.747911832Z" level=info msg="StartContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" returns successfully" Feb 13 19:02:59.421747 kubelet[2687]: E0213 19:02:59.421673 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:00.422677 kubelet[2687]: E0213 19:03:00.422644 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:03.070496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587825715.mount: Deactivated successfully. Feb 13 19:03:05.659049 systemd[1]: Started sshd@7-10.0.0.40:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668). Feb 13 19:03:05.731048 sshd[3151]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:05.732535 sshd-session[3151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:05.738682 systemd-logind[1481]: New session 8 of user core. Feb 13 19:03:05.749208 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:05.907736 sshd[3153]: Connection closed by 10.0.0.1 port 54668 Feb 13 19:03:05.908183 sshd-session[3151]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:05.912476 systemd[1]: sshd@7-10.0.0.40:22-10.0.0.1:54668.service: Deactivated successfully. Feb 13 19:03:05.914720 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:03:05.916032 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:05.917307 systemd-logind[1481]: Removed session 8. 
Feb 13 19:03:06.384872 containerd[1489]: time="2025-02-13T19:03:06.384807293Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:06.385384 containerd[1489]: time="2025-02-13T19:03:06.385324661Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:03:06.386397 containerd[1489]: time="2025-02-13T19:03:06.386356356Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:06.388601 containerd[1489]: time="2025-02-13T19:03:06.388562949Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.706767099s" Feb 13 19:03:06.388665 containerd[1489]: time="2025-02-13T19:03:06.388602429Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:03:06.391662 containerd[1489]: time="2025-02-13T19:03:06.391613514Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:03:06.413051 containerd[1489]: time="2025-02-13T19:03:06.413002590Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\"" Feb 13 19:03:06.413814 containerd[1489]: time="2025-02-13T19:03:06.413789002Z" level=info msg="StartContainer for \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\"" Feb 13 19:03:06.443826 systemd[1]: run-containerd-runc-k8s.io-0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170-runc.PCQcaF.mount: Deactivated successfully. Feb 13 19:03:06.456070 systemd[1]: Started cri-containerd-0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170.scope - libcontainer container 0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170. Feb 13 19:03:06.506142 containerd[1489]: time="2025-02-13T19:03:06.506096128Z" level=info msg="StartContainer for \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\" returns successfully" Feb 13 19:03:06.586654 systemd[1]: cri-containerd-0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170.scope: Deactivated successfully. 
Feb 13 19:03:06.607247 containerd[1489]: time="2025-02-13T19:03:06.607181944Z" level=info msg="shim disconnected" id=0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170 namespace=k8s.io Feb 13 19:03:06.607247 containerd[1489]: time="2025-02-13T19:03:06.607238864Z" level=warning msg="cleaning up after shim disconnected" id=0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170 namespace=k8s.io Feb 13 19:03:06.607247 containerd[1489]: time="2025-02-13T19:03:06.607247625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:07.409679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170-rootfs.mount: Deactivated successfully. Feb 13 19:03:07.450186 kubelet[2687]: E0213 19:03:07.450157 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:07.452937 containerd[1489]: time="2025-02-13T19:03:07.452891070Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:03:07.467866 kubelet[2687]: I0213 19:03:07.467797 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rmjnc" podStartSLOduration=9.863087243 podStartE2EDuration="14.467780843s" podCreationTimestamp="2025-02-13 19:02:53 +0000 UTC" firstStartedPulling="2025-02-13 19:02:54.076835524 +0000 UTC m=+15.822359221" lastFinishedPulling="2025-02-13 19:02:58.681529124 +0000 UTC m=+20.427052821" observedRunningTime="2025-02-13 19:02:59.435821076 +0000 UTC m=+21.181344773" watchObservedRunningTime="2025-02-13 19:03:07.467780843 +0000 UTC m=+29.213304540" Feb 13 19:03:07.470598 containerd[1489]: time="2025-02-13T19:03:07.470547842Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\"" Feb 13 19:03:07.471287 containerd[1489]: time="2025-02-13T19:03:07.471094210Z" level=info msg="StartContainer for \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\"" Feb 13 19:03:07.500070 systemd[1]: Started cri-containerd-bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510.scope - libcontainer container bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510. Feb 13 19:03:07.523981 containerd[1489]: time="2025-02-13T19:03:07.523911364Z" level=info msg="StartContainer for \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\" returns successfully" Feb 13 19:03:07.543235 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:07.543875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:07.544036 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:07.550252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:07.551761 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:03:07.552166 systemd[1]: cri-containerd-bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510.scope: Deactivated successfully. 
Feb 13 19:03:07.574088 containerd[1489]: time="2025-02-13T19:03:07.574014320Z" level=info msg="shim disconnected" id=bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510 namespace=k8s.io Feb 13 19:03:07.574088 containerd[1489]: time="2025-02-13T19:03:07.574081881Z" level=warning msg="cleaning up after shim disconnected" id=bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510 namespace=k8s.io Feb 13 19:03:07.574088 containerd[1489]: time="2025-02-13T19:03:07.574090721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:07.576263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:08.409728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510-rootfs.mount: Deactivated successfully. Feb 13 19:03:08.453948 kubelet[2687]: E0213 19:03:08.453918 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:08.457024 containerd[1489]: time="2025-02-13T19:03:08.456986998Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:03:08.479827 containerd[1489]: time="2025-02-13T19:03:08.479742872Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\"" Feb 13 19:03:08.486606 containerd[1489]: time="2025-02-13T19:03:08.481288814Z" level=info msg="StartContainer for \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\"" Feb 13 19:03:08.520040 systemd[1]: Started cri-containerd-69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69.scope - libcontainer container 69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69. Feb 13 19:03:08.553310 containerd[1489]: time="2025-02-13T19:03:08.553167686Z" level=info msg="StartContainer for \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\" returns successfully" Feb 13 19:03:08.574386 systemd[1]: cri-containerd-69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69.scope: Deactivated successfully. Feb 13 19:03:08.605614 containerd[1489]: time="2025-02-13T19:03:08.605370167Z" level=info msg="shim disconnected" id=69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69 namespace=k8s.io Feb 13 19:03:08.605614 containerd[1489]: time="2025-02-13T19:03:08.605424488Z" level=warning msg="cleaning up after shim disconnected" id=69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69 namespace=k8s.io Feb 13 19:03:08.605614 containerd[1489]: time="2025-02-13T19:03:08.605433648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:09.409609 systemd[1]: run-containerd-runc-k8s.io-69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69-runc.y33Q3X.mount: Deactivated successfully. Feb 13 19:03:09.409710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69-rootfs.mount: Deactivated successfully. 
Feb 13 19:03:09.458555 kubelet[2687]: E0213 19:03:09.458074 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:09.462121 containerd[1489]: time="2025-02-13T19:03:09.462087591Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:03:09.482148 containerd[1489]: time="2025-02-13T19:03:09.482023577Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\"" Feb 13 19:03:09.483566 containerd[1489]: time="2025-02-13T19:03:09.482731986Z" level=info msg="StartContainer for \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\"" Feb 13 19:03:09.487827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015081969.mount: Deactivated successfully. Feb 13 19:03:09.513413 systemd[1]: Started cri-containerd-68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6.scope - libcontainer container 68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6. Feb 13 19:03:09.540349 systemd[1]: cri-containerd-68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6.scope: Deactivated successfully. Feb 13 19:03:09.541829 containerd[1489]: time="2025-02-13T19:03:09.541584173Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice/cri-containerd-68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6.scope/memory.events\": no such file or directory" Feb 13 19:03:09.544553 containerd[1489]: time="2025-02-13T19:03:09.544498492Z" level=info msg="StartContainer for \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\" returns successfully" Feb 13 19:03:09.570237 containerd[1489]: time="2025-02-13T19:03:09.570180715Z" level=info msg="shim disconnected" id=68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6 namespace=k8s.io Feb 13 19:03:09.570595 containerd[1489]: time="2025-02-13T19:03:09.570419078Z" level=warning msg="cleaning up after shim disconnected" id=68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6 namespace=k8s.io Feb 13 19:03:09.570595 containerd[1489]: time="2025-02-13T19:03:09.570434998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:10.409754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6-rootfs.mount: Deactivated successfully. Feb 13 19:03:10.462561 kubelet[2687]: E0213 19:03:10.461575 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:10.466527 containerd[1489]: time="2025-02-13T19:03:10.466354093Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:03:10.489045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030165779.mount: Deactivated successfully. 
Feb 13 19:03:10.496416 containerd[1489]: time="2025-02-13T19:03:10.496258320Z" level=info msg="CreateContainer within sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\"" Feb 13 19:03:10.497503 containerd[1489]: time="2025-02-13T19:03:10.497035850Z" level=info msg="StartContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\"" Feb 13 19:03:10.542076 systemd[1]: Started cri-containerd-730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722.scope - libcontainer container 730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722. Feb 13 19:03:10.571996 containerd[1489]: time="2025-02-13T19:03:10.569239904Z" level=info msg="StartContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" returns successfully" Feb 13 19:03:10.733592 kubelet[2687]: I0213 19:03:10.733551 2687 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:03:10.758045 kubelet[2687]: I0213 19:03:10.757113 2687 topology_manager.go:215] "Topology Admit Handler" podUID="d67326a4-29cd-4136-a74d-960a657e68a1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vml9q" Feb 13 19:03:10.758045 kubelet[2687]: I0213 19:03:10.757525 2687 topology_manager.go:215] "Topology Admit Handler" podUID="5502292f-6954-420b-bffb-17501452e57e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zfv7j" Feb 13 19:03:10.772058 systemd[1]: Created slice kubepods-burstable-podd67326a4_29cd_4136_a74d_960a657e68a1.slice - libcontainer container kubepods-burstable-podd67326a4_29cd_4136_a74d_960a657e68a1.slice. Feb 13 19:03:10.778323 systemd[1]: Created slice kubepods-burstable-pod5502292f_6954_420b_bffb_17501452e57e.slice - libcontainer container kubepods-burstable-pod5502292f_6954_420b_bffb_17501452e57e.slice. 
Feb 13 19:03:10.852198 kubelet[2687]: I0213 19:03:10.852057 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d67326a4-29cd-4136-a74d-960a657e68a1-config-volume\") pod \"coredns-7db6d8ff4d-vml9q\" (UID: \"d67326a4-29cd-4136-a74d-960a657e68a1\") " pod="kube-system/coredns-7db6d8ff4d-vml9q" Feb 13 19:03:10.852198 kubelet[2687]: I0213 19:03:10.852107 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfx9\" (UniqueName: \"kubernetes.io/projected/d67326a4-29cd-4136-a74d-960a657e68a1-kube-api-access-cbfx9\") pod \"coredns-7db6d8ff4d-vml9q\" (UID: \"d67326a4-29cd-4136-a74d-960a657e68a1\") " pod="kube-system/coredns-7db6d8ff4d-vml9q" Feb 13 19:03:10.852198 kubelet[2687]: I0213 19:03:10.852133 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5502292f-6954-420b-bffb-17501452e57e-config-volume\") pod \"coredns-7db6d8ff4d-zfv7j\" (UID: \"5502292f-6954-420b-bffb-17501452e57e\") " pod="kube-system/coredns-7db6d8ff4d-zfv7j" Feb 13 19:03:10.852198 kubelet[2687]: I0213 19:03:10.852155 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxlsk\" (UniqueName: \"kubernetes.io/projected/5502292f-6954-420b-bffb-17501452e57e-kube-api-access-vxlsk\") pod \"coredns-7db6d8ff4d-zfv7j\" (UID: \"5502292f-6954-420b-bffb-17501452e57e\") " pod="kube-system/coredns-7db6d8ff4d-zfv7j" Feb 13 19:03:10.938393 systemd[1]: Started sshd@8-10.0.0.40:22-10.0.0.1:54674.service - OpenSSH per-connection server daemon (10.0.0.1:54674). Feb 13 19:03:10.987956 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 54674 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:10.990199 sshd-session[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:10.995337 systemd-logind[1481]: New session 9 of user core. Feb 13 19:03:11.003089 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:11.078078 kubelet[2687]: E0213 19:03:11.078014 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:11.079983 containerd[1489]: time="2025-02-13T19:03:11.079943281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vml9q,Uid:d67326a4-29cd-4136-a74d-960a657e68a1,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:11.081628 kubelet[2687]: E0213 19:03:11.081600 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:11.082162 containerd[1489]: time="2025-02-13T19:03:11.082089028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfv7j,Uid:5502292f-6954-420b-bffb-17501452e57e,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:11.173676 sshd[3497]: Connection closed by 10.0.0.1 port 54674 Feb 13 19:03:11.174458 sshd-session[3487]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:11.216356 systemd[1]: sshd@8-10.0.0.40:22-10.0.0.1:54674.service: Deactivated successfully. Feb 13 19:03:11.220016 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 19:03:11.231201 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:03:11.280193 systemd-logind[1481]: Removed session 9. Feb 13 19:03:11.467071 kubelet[2687]: E0213 19:03:11.467026 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:12.468915 kubelet[2687]: E0213 19:03:12.468847 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:12.951592 systemd-networkd[1410]: cilium_host: Link UP Feb 13 19:03:12.951718 systemd-networkd[1410]: cilium_net: Link UP Feb 13 19:03:12.952838 systemd-networkd[1410]: cilium_net: Gained carrier Feb 13 19:03:12.953104 systemd-networkd[1410]: cilium_host: Gained carrier Feb 13 19:03:12.953283 systemd-networkd[1410]: cilium_net: Gained IPv6LL Feb 13 19:03:12.953488 systemd-networkd[1410]: cilium_host: Gained IPv6LL Feb 13 19:03:13.052279 systemd-networkd[1410]: cilium_vxlan: Link UP Feb 13 19:03:13.052290 systemd-networkd[1410]: cilium_vxlan: Gained carrier Feb 13 19:03:13.470760 kubelet[2687]: E0213 19:03:13.470480 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:13.481915 kernel: NET: Registered PF_ALG protocol family Feb 13 19:03:14.155389 systemd-networkd[1410]: lxc_health: Link UP Feb 13 19:03:14.164372 systemd-networkd[1410]: lxc_health: Gained carrier Feb 13 19:03:14.322895 systemd-networkd[1410]: lxc92d1d5e2c053: Link UP Feb 13 19:03:14.331915 kernel: eth0: renamed from tmp3d64b Feb 13 19:03:14.337137 systemd-networkd[1410]: lxc92d1d5e2c053: Gained carrier Feb 13 19:03:14.350614 systemd-networkd[1410]: lxcbe35b4326bb8: Link UP Feb 13 19:03:14.356014 kernel: eth0: renamed from tmp10a15 Feb 13 19:03:14.366115 systemd-networkd[1410]: lxcbe35b4326bb8: Gained carrier Feb 13 19:03:14.861052 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL Feb 13 19:03:15.885068 systemd-networkd[1410]: lxcbe35b4326bb8: Gained IPv6LL Feb 13 19:03:15.999681 kubelet[2687]: E0213 19:03:15.999628 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:16.017011 kubelet[2687]: I0213 19:03:16.016949 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4h9mh" podStartSLOduration=10.720289111 podStartE2EDuration="23.016932562s" podCreationTimestamp="2025-02-13 19:02:53 +0000 UTC" firstStartedPulling="2025-02-13 19:02:54.092534867 +0000 UTC m=+15.838058564" lastFinishedPulling="2025-02-13 19:03:06.389178318 +0000 UTC m=+28.134702015" observedRunningTime="2025-02-13 19:03:11.507151801 +0000 UTC m=+33.252675498" watchObservedRunningTime="2025-02-13 19:03:16.016932562 +0000 UTC m=+37.762456259" Feb 13 19:03:16.077000 systemd-networkd[1410]: lxc_health: Gained IPv6LL Feb 13 19:03:16.077332 systemd-networkd[1410]: lxc92d1d5e2c053: Gained IPv6LL Feb 13 19:03:16.186136 systemd[1]: Started sshd@9-10.0.0.40:22-10.0.0.1:37246.service - OpenSSH per-connection server daemon (10.0.0.1:37246). 
Feb 13 19:03:16.234690 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 37246 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:16.236328 sshd-session[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:16.245461 systemd-logind[1481]: New session 10 of user core. Feb 13 19:03:16.257176 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:03:16.407408 sshd[3957]: Connection closed by 10.0.0.1 port 37246 Feb 13 19:03:16.408025 sshd-session[3955]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:16.412281 systemd[1]: sshd@9-10.0.0.40:22-10.0.0.1:37246.service: Deactivated successfully. Feb 13 19:03:16.415236 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:16.416423 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:16.417804 systemd-logind[1481]: Removed session 10. Feb 13 19:03:16.548855 kubelet[2687]: I0213 19:03:16.548714 2687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:03:16.549692 kubelet[2687]: E0213 19:03:16.549654 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:17.488274 kubelet[2687]: E0213 19:03:17.487929 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.245689 containerd[1489]: time="2025-02-13T19:03:18.245576333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:18.245689 containerd[1489]: time="2025-02-13T19:03:18.245658214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:18.245689 containerd[1489]: time="2025-02-13T19:03:18.245674734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:18.246791 containerd[1489]: time="2025-02-13T19:03:18.246742826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:18.247182 containerd[1489]: time="2025-02-13T19:03:18.247104669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:18.247182 containerd[1489]: time="2025-02-13T19:03:18.247173830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:18.248200 containerd[1489]: time="2025-02-13T19:03:18.247189870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:18.248200 containerd[1489]: time="2025-02-13T19:03:18.247296591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:18.277086 systemd[1]: Started cri-containerd-10a15fbc8edf659980814521fa005ad45010be3e9f53b11a609339e5111a3cd3.scope - libcontainer container 10a15fbc8edf659980814521fa005ad45010be3e9f53b11a609339e5111a3cd3. 
Feb 13 19:03:18.279721 systemd[1]: Started cri-containerd-3d64bf0a67ee8ed3cca1b3e6f2c6c67e8805e7c15e5e2f58584de9da8bebcdc4.scope - libcontainer container 3d64bf0a67ee8ed3cca1b3e6f2c6c67e8805e7c15e5e2f58584de9da8bebcdc4. Feb 13 19:03:18.291660 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:18.294920 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:18.314269 containerd[1489]: time="2025-02-13T19:03:18.314211807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfv7j,Uid:5502292f-6954-420b-bffb-17501452e57e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d64bf0a67ee8ed3cca1b3e6f2c6c67e8805e7c15e5e2f58584de9da8bebcdc4\"" Feb 13 19:03:18.316140 kubelet[2687]: E0213 19:03:18.316110 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.318724 containerd[1489]: time="2025-02-13T19:03:18.318663333Z" level=info msg="CreateContainer within sandbox \"3d64bf0a67ee8ed3cca1b3e6f2c6c67e8805e7c15e5e2f58584de9da8bebcdc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:18.322186 containerd[1489]: time="2025-02-13T19:03:18.321585444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vml9q,Uid:d67326a4-29cd-4136-a74d-960a657e68a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"10a15fbc8edf659980814521fa005ad45010be3e9f53b11a609339e5111a3cd3\"" Feb 13 19:03:18.323948 kubelet[2687]: E0213 19:03:18.323839 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.326327 containerd[1489]: time="2025-02-13T19:03:18.326289293Z" level=info msg="CreateContainer within sandbox \"10a15fbc8edf659980814521fa005ad45010be3e9f53b11a609339e5111a3cd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:18.340252 containerd[1489]: time="2025-02-13T19:03:18.340183797Z" level=info msg="CreateContainer within sandbox \"3d64bf0a67ee8ed3cca1b3e6f2c6c67e8805e7c15e5e2f58584de9da8bebcdc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"131e49f1daa0693f603d35970efe280b5307c9f9f6114e4dcf413d5f4fe4c893\"" Feb 13 19:03:18.341085 containerd[1489]: time="2025-02-13T19:03:18.340989525Z" level=info msg="StartContainer for \"131e49f1daa0693f603d35970efe280b5307c9f9f6114e4dcf413d5f4fe4c893\"" Feb 13 19:03:18.344112 containerd[1489]: time="2025-02-13T19:03:18.344074877Z" level=info msg="CreateContainer within sandbox \"10a15fbc8edf659980814521fa005ad45010be3e9f53b11a609339e5111a3cd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f56c2fa0402c8c73affc6944c59c4e8f557122e6d11559f91ad6be9e0d67690\"" Feb 13 19:03:18.344893 containerd[1489]: time="2025-02-13T19:03:18.344587043Z" level=info msg="StartContainer for \"5f56c2fa0402c8c73affc6944c59c4e8f557122e6d11559f91ad6be9e0d67690\"" Feb 13 19:03:18.371028 systemd[1]: Started cri-containerd-131e49f1daa0693f603d35970efe280b5307c9f9f6114e4dcf413d5f4fe4c893.scope - libcontainer container 131e49f1daa0693f603d35970efe280b5307c9f9f6114e4dcf413d5f4fe4c893. 
Feb 13 19:03:18.374460 systemd[1]: Started cri-containerd-5f56c2fa0402c8c73affc6944c59c4e8f557122e6d11559f91ad6be9e0d67690.scope - libcontainer container 5f56c2fa0402c8c73affc6944c59c4e8f557122e6d11559f91ad6be9e0d67690. Feb 13 19:03:18.422132 containerd[1489]: time="2025-02-13T19:03:18.422008688Z" level=info msg="StartContainer for \"131e49f1daa0693f603d35970efe280b5307c9f9f6114e4dcf413d5f4fe4c893\" returns successfully" Feb 13 19:03:18.422481 containerd[1489]: time="2025-02-13T19:03:18.422016808Z" level=info msg="StartContainer for \"5f56c2fa0402c8c73affc6944c59c4e8f557122e6d11559f91ad6be9e0d67690\" returns successfully" Feb 13 19:03:18.499290 kubelet[2687]: E0213 19:03:18.498283 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.508281 kubelet[2687]: E0213 19:03:18.504930 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.529507 kubelet[2687]: I0213 19:03:18.529348 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zfv7j" podStartSLOduration=25.529327403 podStartE2EDuration="25.529327403s" podCreationTimestamp="2025-02-13 19:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:18.513144995 +0000 UTC m=+40.258668692" watchObservedRunningTime="2025-02-13 19:03:18.529327403 +0000 UTC m=+40.274851100" Feb 13 19:03:19.509450 kubelet[2687]: E0213 19:03:19.509230 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:19.509450 kubelet[2687]: E0213 19:03:19.509344 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:19.525096 kubelet[2687]: I0213 19:03:19.525038 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vml9q" podStartSLOduration=26.525019632 podStartE2EDuration="26.525019632s" podCreationTimestamp="2025-02-13 19:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:18.532072352 +0000 UTC m=+40.277596049" watchObservedRunningTime="2025-02-13 19:03:19.525019632 +0000 UTC m=+41.270543329" Feb 13 19:03:20.511133 kubelet[2687]: E0213 19:03:20.510919 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:20.511133 kubelet[2687]: E0213 19:03:20.511066 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:21.422266 systemd[1]: Started sshd@10-10.0.0.40:22-10.0.0.1:37250.service - OpenSSH per-connection server daemon (10.0.0.1:37250). 
Feb 13 19:03:21.473170 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 37250 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:21.475067 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:21.479653 systemd-logind[1481]: New session 11 of user core. Feb 13 19:03:21.487064 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:03:21.614909 sshd[4152]: Connection closed by 10.0.0.1 port 37250 Feb 13 19:03:21.615714 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:21.625849 systemd[1]: sshd@10-10.0.0.40:22-10.0.0.1:37250.service: Deactivated successfully. Feb 13 19:03:21.627533 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:03:21.628916 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:03:21.644190 systemd[1]: Started sshd@11-10.0.0.40:22-10.0.0.1:37264.service - OpenSSH per-connection server daemon (10.0.0.1:37264). Feb 13 19:03:21.645483 systemd-logind[1481]: Removed session 11. Feb 13 19:03:21.682296 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 37264 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:21.683606 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:21.687979 systemd-logind[1481]: New session 12 of user core. Feb 13 19:03:21.708069 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:03:21.858162 sshd[4168]: Connection closed by 10.0.0.1 port 37264 Feb 13 19:03:21.858394 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:21.872898 systemd[1]: sshd@11-10.0.0.40:22-10.0.0.1:37264.service: Deactivated successfully. Feb 13 19:03:21.874617 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:03:21.878263 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:03:21.891232 systemd[1]: Started sshd@12-10.0.0.40:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Feb 13 19:03:21.892234 systemd-logind[1481]: Removed session 12. Feb 13 19:03:21.937400 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:21.938502 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:21.942737 systemd-logind[1481]: New session 13 of user core. Feb 13 19:03:21.953081 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:03:22.064540 sshd[4182]: Connection closed by 10.0.0.1 port 37268 Feb 13 19:03:22.064906 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:22.068133 systemd[1]: sshd@12-10.0.0.40:22-10.0.0.1:37268.service: Deactivated successfully. Feb 13 19:03:22.069845 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:03:22.071994 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:03:22.073040 systemd-logind[1481]: Removed session 13. Feb 13 19:03:27.080284 systemd[1]: Started sshd@13-10.0.0.40:22-10.0.0.1:47536.service - OpenSSH per-connection server daemon (10.0.0.1:47536). 
Feb 13 19:03:27.122773 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:27.124925 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:27.130334 systemd-logind[1481]: New session 14 of user core. Feb 13 19:03:27.137053 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:03:27.261010 sshd[4203]: Connection closed by 10.0.0.1 port 47536 Feb 13 19:03:27.261495 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:27.265979 systemd[1]: sshd@13-10.0.0.40:22-10.0.0.1:47536.service: Deactivated successfully. Feb 13 19:03:27.268104 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:03:27.269262 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:03:27.270200 systemd-logind[1481]: Removed session 14. Feb 13 19:03:32.276986 systemd[1]: Started sshd@14-10.0.0.40:22-10.0.0.1:47540.service - OpenSSH per-connection server daemon (10.0.0.1:47540). Feb 13 19:03:32.319294 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 47540 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:32.320672 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:32.324208 systemd-logind[1481]: New session 15 of user core. Feb 13 19:03:32.334037 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:03:32.447088 sshd[4218]: Connection closed by 10.0.0.1 port 47540 Feb 13 19:03:32.447596 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:32.457022 systemd[1]: sshd@14-10.0.0.40:22-10.0.0.1:47540.service: Deactivated successfully. Feb 13 19:03:32.458717 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:03:32.461664 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:03:32.473194 systemd[1]: Started sshd@15-10.0.0.40:22-10.0.0.1:47544.service - OpenSSH per-connection server daemon (10.0.0.1:47544). Feb 13 19:03:32.474923 systemd-logind[1481]: Removed session 15. Feb 13 19:03:32.515913 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 47544 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:32.516622 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:32.521064 systemd-logind[1481]: New session 16 of user core. Feb 13 19:03:32.534069 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:03:32.737018 sshd[4233]: Connection closed by 10.0.0.1 port 47544 Feb 13 19:03:32.737681 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:32.751123 systemd[1]: sshd@15-10.0.0.40:22-10.0.0.1:47544.service: Deactivated successfully. Feb 13 19:03:32.753070 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:03:32.753791 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:03:32.761195 systemd[1]: Started sshd@16-10.0.0.40:22-10.0.0.1:38170.service - OpenSSH per-connection server daemon (10.0.0.1:38170). Feb 13 19:03:32.762448 systemd-logind[1481]: Removed session 16. 
Feb 13 19:03:32.803006 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 38170 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:32.804456 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:32.808877 systemd-logind[1481]: New session 17 of user core. Feb 13 19:03:32.813020 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:03:34.300672 sshd[4246]: Connection closed by 10.0.0.1 port 38170 Feb 13 19:03:34.301356 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:34.312300 systemd[1]: sshd@16-10.0.0.40:22-10.0.0.1:38170.service: Deactivated successfully. Feb 13 19:03:34.314450 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:03:34.318104 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:03:34.328207 systemd[1]: Started sshd@17-10.0.0.40:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178). Feb 13 19:03:34.332131 systemd-logind[1481]: Removed session 17. Feb 13 19:03:34.381564 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:34.382992 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:34.388220 systemd-logind[1481]: New session 18 of user core. Feb 13 19:03:34.396078 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:03:34.641435 sshd[4267]: Connection closed by 10.0.0.1 port 38178 Feb 13 19:03:34.643139 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:34.676246 systemd[1]: Started sshd@18-10.0.0.40:22-10.0.0.1:38186.service - OpenSSH per-connection server daemon (10.0.0.1:38186). Feb 13 19:03:34.676917 systemd[1]: sshd@17-10.0.0.40:22-10.0.0.1:38178.service: Deactivated successfully. Feb 13 19:03:34.678781 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:03:34.684451 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:03:34.685651 systemd-logind[1481]: Removed session 18. Feb 13 19:03:34.719764 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:34.721210 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:34.727220 systemd-logind[1481]: New session 19 of user core. Feb 13 19:03:34.741214 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:03:34.885026 sshd[4280]: Connection closed by 10.0.0.1 port 38186 Feb 13 19:03:34.885624 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:34.890540 systemd[1]: sshd@18-10.0.0.40:22-10.0.0.1:38186.service: Deactivated successfully. Feb 13 19:03:34.892574 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:03:34.894270 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:03:34.895120 systemd-logind[1481]: Removed session 19. Feb 13 19:03:39.897918 systemd[1]: Started sshd@19-10.0.0.40:22-10.0.0.1:38192.service - OpenSSH per-connection server daemon (10.0.0.1:38192). 
Feb 13 19:03:39.940080 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 38192 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:39.941358 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:39.944913 systemd-logind[1481]: New session 20 of user core. Feb 13 19:03:39.952094 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:03:40.065083 sshd[4301]: Connection closed by 10.0.0.1 port 38192 Feb 13 19:03:40.065847 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:40.069261 systemd[1]: sshd@19-10.0.0.40:22-10.0.0.1:38192.service: Deactivated successfully. Feb 13 19:03:40.071052 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:03:40.071795 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:03:40.072534 systemd-logind[1481]: Removed session 20. Feb 13 19:03:45.091141 systemd[1]: Started sshd@20-10.0.0.40:22-10.0.0.1:44280.service - OpenSSH per-connection server daemon (10.0.0.1:44280). Feb 13 19:03:45.156477 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 44280 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:45.157756 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:45.161420 systemd-logind[1481]: New session 21 of user core. Feb 13 19:03:45.172064 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:03:45.283410 sshd[4316]: Connection closed by 10.0.0.1 port 44280 Feb 13 19:03:45.284125 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:45.287132 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:03:45.290064 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:03:45.290285 systemd[1]: sshd@20-10.0.0.40:22-10.0.0.1:44280.service: Deactivated successfully. Feb 13 19:03:45.292373 systemd-logind[1481]: Removed session 21. Feb 13 19:03:46.349828 kubelet[2687]: E0213 19:03:46.349424 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:50.297367 systemd[1]: Started sshd@21-10.0.0.40:22-10.0.0.1:44294.service - OpenSSH per-connection server daemon (10.0.0.1:44294). Feb 13 19:03:50.346311 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 44294 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:50.347667 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:50.352316 systemd-logind[1481]: New session 22 of user core. Feb 13 19:03:50.362098 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:03:50.477964 sshd[4332]: Connection closed by 10.0.0.1 port 44294 Feb 13 19:03:50.478403 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:50.481982 systemd[1]: sshd@21-10.0.0.40:22-10.0.0.1:44294.service: Deactivated successfully. Feb 13 19:03:50.484095 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:03:50.486019 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:03:50.487925 systemd-logind[1481]: Removed session 22. Feb 13 19:03:55.492397 systemd[1]: Started sshd@22-10.0.0.40:22-10.0.0.1:44426.service - OpenSSH per-connection server daemon (10.0.0.1:44426). 
Feb 13 19:03:55.543693 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 44426 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:55.545383 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:55.551101 systemd-logind[1481]: New session 23 of user core.
Feb 13 19:03:55.562145 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:03:55.694530 sshd[4351]: Connection closed by 10.0.0.1 port 44426
Feb 13 19:03:55.695011 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:55.707289 systemd[1]: sshd@22-10.0.0.40:22-10.0.0.1:44426.service: Deactivated successfully.
Feb 13 19:03:55.709016 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:03:55.710552 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:03:55.723226 systemd[1]: Started sshd@23-10.0.0.40:22-10.0.0.1:44432.service - OpenSSH per-connection server daemon (10.0.0.1:44432).
Feb 13 19:03:55.724738 systemd-logind[1481]: Removed session 23.
Feb 13 19:03:55.762440 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:55.763849 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:55.770569 systemd-logind[1481]: New session 24 of user core.
Feb 13 19:03:55.781052 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:03:58.245113 containerd[1489]: time="2025-02-13T19:03:58.245059316Z" level=info msg="StopContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" with timeout 30 (s)"
Feb 13 19:03:58.246168 containerd[1489]: time="2025-02-13T19:03:58.246137758Z" level=info msg="Stop container \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" with signal terminated"
Feb 13 19:03:58.257462 systemd[1]: cri-containerd-4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9.scope: Deactivated successfully.
Feb 13 19:03:58.287195 containerd[1489]: time="2025-02-13T19:03:58.287043749Z" level=info msg="StopContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" with timeout 2 (s)"
Feb 13 19:03:58.287579 containerd[1489]: time="2025-02-13T19:03:58.287500990Z" level=info msg="Stop container \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" with signal terminated"
Feb 13 19:03:58.289162 containerd[1489]: time="2025-02-13T19:03:58.289103592Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:58.290658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9-rootfs.mount: Deactivated successfully.
Feb 13 19:03:58.294340 systemd-networkd[1410]: lxc_health: Link DOWN
Feb 13 19:03:58.294347 systemd-networkd[1410]: lxc_health: Lost carrier
Feb 13 19:03:58.297525 containerd[1489]: time="2025-02-13T19:03:58.297324127Z" level=info msg="shim disconnected" id=4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9 namespace=k8s.io
Feb 13 19:03:58.297749 containerd[1489]: time="2025-02-13T19:03:58.297614887Z" level=warning msg="cleaning up after shim disconnected" id=4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9 namespace=k8s.io
Feb 13 19:03:58.297749 containerd[1489]: time="2025-02-13T19:03:58.297629247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.312422 systemd[1]: cri-containerd-730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722.scope: Deactivated successfully.
Feb 13 19:03:58.312792 systemd[1]: cri-containerd-730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722.scope: Consumed 7.253s CPU time, 122.5M memory peak, 160K read from disk, 12.9M written to disk.
Feb 13 19:03:58.351767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722-rootfs.mount: Deactivated successfully.
Feb 13 19:03:58.367663 containerd[1489]: time="2025-02-13T19:03:58.367614889Z" level=info msg="StopContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" returns successfully"
Feb 13 19:03:58.368055 containerd[1489]: time="2025-02-13T19:03:58.367819849Z" level=info msg="shim disconnected" id=730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722 namespace=k8s.io
Feb 13 19:03:58.368055 containerd[1489]: time="2025-02-13T19:03:58.367855689Z" level=warning msg="cleaning up after shim disconnected" id=730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722 namespace=k8s.io
Feb 13 19:03:58.368055 containerd[1489]: time="2025-02-13T19:03:58.367907049Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.368767 containerd[1489]: time="2025-02-13T19:03:58.368716450Z" level=info msg="StopPodSandbox for \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\""
Feb 13 19:03:58.372680 containerd[1489]: time="2025-02-13T19:03:58.372608737Z" level=info msg="Container to stop \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.374828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a-shm.mount: Deactivated successfully.
Feb 13 19:03:58.379878 systemd[1]: cri-containerd-a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a.scope: Deactivated successfully.
Feb 13 19:03:58.394692 containerd[1489]: time="2025-02-13T19:03:58.394636935Z" level=info msg="StopContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" returns successfully"
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395388977Z" level=info msg="StopPodSandbox for \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\""
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395428297Z" level=info msg="Container to stop \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395447337Z" level=info msg="Container to stop \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395456057Z" level=info msg="Container to stop \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395464097Z" level=info msg="Container to stop \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.395609 containerd[1489]: time="2025-02-13T19:03:58.395472177Z" level=info msg="Container to stop \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.397808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f-shm.mount: Deactivated successfully.
Feb 13 19:03:58.409465 kubelet[2687]: E0213 19:03:58.409389 2687 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:03:58.414791 systemd[1]: cri-containerd-0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f.scope: Deactivated successfully.
Feb 13 19:03:58.421517 containerd[1489]: time="2025-02-13T19:03:58.421458742Z" level=info msg="shim disconnected" id=a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a namespace=k8s.io
Feb 13 19:03:58.421517 containerd[1489]: time="2025-02-13T19:03:58.421512902Z" level=warning msg="cleaning up after shim disconnected" id=a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a namespace=k8s.io
Feb 13 19:03:58.421517 containerd[1489]: time="2025-02-13T19:03:58.421521462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.435031 containerd[1489]: time="2025-02-13T19:03:58.434970645Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:03:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:03:58.436123 containerd[1489]: time="2025-02-13T19:03:58.436095367Z" level=info msg="TearDown network for sandbox \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\" successfully"
Feb 13 19:03:58.436123 containerd[1489]: time="2025-02-13T19:03:58.436120367Z" level=info msg="StopPodSandbox for \"a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a\" returns successfully"
Feb 13 19:03:58.440721 containerd[1489]: time="2025-02-13T19:03:58.440656295Z" level=info msg="shim disconnected" id=0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f namespace=k8s.io
Feb 13 19:03:58.441554 containerd[1489]: time="2025-02-13T19:03:58.441404296Z" level=warning msg="cleaning up after shim disconnected" id=0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f namespace=k8s.io
Feb 13 19:03:58.441554 containerd[1489]: time="2025-02-13T19:03:58.441420296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.458997 containerd[1489]: time="2025-02-13T19:03:58.458883967Z" level=info msg="TearDown network for sandbox \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" successfully"
Feb 13 19:03:58.458997 containerd[1489]: time="2025-02-13T19:03:58.458919087Z" level=info msg="StopPodSandbox for \"0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f\" returns successfully"
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.568995 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-bpf-maps\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.569035 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d5515-9f41-4e18-98a3-131705548c6b-cilium-config-path\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.569057 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv7xk\" (UniqueName: \"kubernetes.io/projected/e45be814-07d1-456d-a1af-eeb5cc0d14a8-kube-api-access-kv7xk\") pod \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\" (UID: \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\") "
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.569071 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cni-path\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.569087 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-hubble-tls\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569485 kubelet[2687]: I0213 19:03:58.569107 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/709d5515-9f41-4e18-98a3-131705548c6b-clustermesh-secrets\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569121 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-hostproc\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569144 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-cgroup\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569161 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-run\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569176 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-net\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569190 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-xtables-lock\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569733 kubelet[2687]: I0213 19:03:58.569232 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-kernel\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569872 kubelet[2687]: I0213 19:03:58.569252 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-lib-modules\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569872 kubelet[2687]: I0213 19:03:58.569269 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmpk\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569872 kubelet[2687]: I0213 19:03:58.569286 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-etc-cni-netd\") pod \"709d5515-9f41-4e18-98a3-131705548c6b\" (UID: \"709d5515-9f41-4e18-98a3-131705548c6b\") "
Feb 13 19:03:58.569872 kubelet[2687]: I0213 19:03:58.569302 2687 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e45be814-07d1-456d-a1af-eeb5cc0d14a8-cilium-config-path\") pod \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\" (UID: \"e45be814-07d1-456d-a1af-eeb5cc0d14a8\") "
Feb 13 19:03:58.573045 kubelet[2687]: I0213 19:03:58.572704 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573045 kubelet[2687]: I0213 19:03:58.572771 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573045 kubelet[2687]: I0213 19:03:58.572807 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573045 kubelet[2687]: I0213 19:03:58.572842 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573045 kubelet[2687]: I0213 19:03:58.572873 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573215 kubelet[2687]: I0213 19:03:58.572895 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.573215 kubelet[2687]: I0213 19:03:58.573042 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.574345 kubelet[2687]: I0213 19:03:58.574117 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cni-path" (OuterVolumeSpecName: "cni-path") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.574345 kubelet[2687]: I0213 19:03:58.574239 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-hostproc" (OuterVolumeSpecName: "hostproc") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.574447 kubelet[2687]: I0213 19:03:58.574388 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.576082 kubelet[2687]: I0213 19:03:58.576041 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709d5515-9f41-4e18-98a3-131705548c6b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:03:58.576154 kubelet[2687]: I0213 19:03:58.576109 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709d5515-9f41-4e18-98a3-131705548c6b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:03:58.576783 kubelet[2687]: I0213 19:03:58.576745 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk" (OuterVolumeSpecName: "kube-api-access-gpmpk") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "kube-api-access-gpmpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:58.577173 kubelet[2687]: I0213 19:03:58.577144 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45be814-07d1-456d-a1af-eeb5cc0d14a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e45be814-07d1-456d-a1af-eeb5cc0d14a8" (UID: "e45be814-07d1-456d-a1af-eeb5cc0d14a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:03:58.577734 kubelet[2687]: I0213 19:03:58.577503 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45be814-07d1-456d-a1af-eeb5cc0d14a8-kube-api-access-kv7xk" (OuterVolumeSpecName: "kube-api-access-kv7xk") pod "e45be814-07d1-456d-a1af-eeb5cc0d14a8" (UID: "e45be814-07d1-456d-a1af-eeb5cc0d14a8"). InnerVolumeSpecName "kube-api-access-kv7xk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:58.577913 kubelet[2687]: I0213 19:03:58.577887 2687 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "709d5515-9f41-4e18-98a3-131705548c6b" (UID: "709d5515-9f41-4e18-98a3-131705548c6b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:58.596635 kubelet[2687]: I0213 19:03:58.596592 2687 scope.go:117] "RemoveContainer" containerID="730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722"
Feb 13 19:03:58.598953 containerd[1489]: time="2025-02-13T19:03:58.598920169Z" level=info msg="RemoveContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\""
Feb 13 19:03:58.604210 systemd[1]: Removed slice kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice - libcontainer container kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice.
Feb 13 19:03:58.604319 systemd[1]: kubepods-burstable-pod709d5515_9f41_4e18_98a3_131705548c6b.slice: Consumed 7.454s CPU time, 122.8M memory peak, 168K read from disk, 12.9M written to disk.
Feb 13 19:03:58.607044 systemd[1]: Removed slice kubepods-besteffort-pode45be814_07d1_456d_a1af_eeb5cc0d14a8.slice - libcontainer container kubepods-besteffort-pode45be814_07d1_456d_a1af_eeb5cc0d14a8.slice.
Feb 13 19:03:58.615732 containerd[1489]: time="2025-02-13T19:03:58.615655838Z" level=info msg="RemoveContainer for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" returns successfully"
Feb 13 19:03:58.616059 kubelet[2687]: I0213 19:03:58.616019 2687 scope.go:117] "RemoveContainer" containerID="68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6"
Feb 13 19:03:58.618334 containerd[1489]: time="2025-02-13T19:03:58.618275203Z" level=info msg="RemoveContainer for \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\""
Feb 13 19:03:58.624730 containerd[1489]: time="2025-02-13T19:03:58.624520214Z" level=info msg="RemoveContainer for \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\" returns successfully"
Feb 13 19:03:58.624828 kubelet[2687]: I0213 19:03:58.624754 2687 scope.go:117] "RemoveContainer" containerID="69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69"
Feb 13 19:03:58.627254 containerd[1489]: time="2025-02-13T19:03:58.626972978Z" level=info msg="RemoveContainer for \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\""
Feb 13 19:03:58.630325 containerd[1489]: time="2025-02-13T19:03:58.630090503Z" level=info msg="RemoveContainer for \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\" returns successfully"
Feb 13 19:03:58.630691 kubelet[2687]: I0213 19:03:58.630657 2687 scope.go:117] "RemoveContainer" containerID="bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510"
Feb 13 19:03:58.633135 containerd[1489]: time="2025-02-13T19:03:58.632880508Z" level=info msg="RemoveContainer for \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\""
Feb 13 19:03:58.635483 containerd[1489]: time="2025-02-13T19:03:58.635449313Z" level=info msg="RemoveContainer for \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\" returns successfully"
Feb 13 19:03:58.635801 kubelet[2687]: I0213 19:03:58.635768 2687 scope.go:117] "RemoveContainer" containerID="0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170"
Feb 13 19:03:58.637228 containerd[1489]: time="2025-02-13T19:03:58.636975555Z" level=info msg="RemoveContainer for \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\""
Feb 13 19:03:58.639543 containerd[1489]: time="2025-02-13T19:03:58.639508880Z" level=info msg="RemoveContainer for \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\" returns successfully"
Feb 13 19:03:58.639976 kubelet[2687]: I0213 19:03:58.639953 2687 scope.go:117] "RemoveContainer" containerID="730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722"
Feb 13 19:03:58.640459 containerd[1489]: time="2025-02-13T19:03:58.640335961Z" level=error msg="ContainerStatus for \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\": not found"
Feb 13 19:03:58.648389 kubelet[2687]: E0213 19:03:58.648148 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\": not found" containerID="730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722"
Feb 13 19:03:58.648389 kubelet[2687]: I0213 19:03:58.648197 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722"} err="failed to get container status \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\": rpc error: code = NotFound desc = an error occurred when try to find container \"730be7a5890a7c5561b432a1c57201ac09114dffee8b361bef345744b5e98722\": not found"
Feb 13 19:03:58.648389 kubelet[2687]: I0213 19:03:58.648280 2687 scope.go:117] "RemoveContainer" containerID="68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6"
Feb 13 19:03:58.648624 containerd[1489]: time="2025-02-13T19:03:58.648555935Z" level=error msg="ContainerStatus for \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\": not found"
Feb 13 19:03:58.648887 kubelet[2687]: E0213 19:03:58.648758 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\": not found" containerID="68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6"
Feb 13 19:03:58.648887 kubelet[2687]: I0213 19:03:58.648784 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6"} err="failed to get container status \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"68a93c1014c301ae8d606ca0923fe982f1f960f24a93f89a6faa727c03bed9f6\": not found"
Feb 13 19:03:58.648887 kubelet[2687]: I0213 19:03:58.648800 2687 scope.go:117] "RemoveContainer" containerID="69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69"
Feb 13 19:03:58.649073 containerd[1489]: time="2025-02-13T19:03:58.649006776Z" level=error msg="ContainerStatus for \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\": not found"
Feb 13 19:03:58.649133 kubelet[2687]: E0213 19:03:58.649107 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\": not found" containerID="69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69"
Feb 13 19:03:58.649163 kubelet[2687]: I0213 19:03:58.649135 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69"} err="failed to get container status \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\": rpc error: code = NotFound desc = an error occurred when try to find container \"69806a7238abdfa04beae07f585708da1df330135fc0db8b19a857ca849eaa69\": not found"
Feb 13 19:03:58.649163 kubelet[2687]: I0213 19:03:58.649158 2687 scope.go:117] "RemoveContainer" containerID="bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510"
Feb 13 19:03:58.649378 containerd[1489]: time="2025-02-13T19:03:58.649347217Z" level=error msg="ContainerStatus for \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\": not found"
Feb 13 19:03:58.649527 kubelet[2687]: E0213 19:03:58.649494 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\": not found" containerID="bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510"
Feb 13 19:03:58.649566 kubelet[2687]: I0213 19:03:58.649521 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510"} err="failed to get container status \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf92e33134786fa63f45ff1cf4fcf68fe91756158e493410f804fd632b53e510\": not found"
Feb 13 19:03:58.649566 kubelet[2687]: I0213 19:03:58.649540 2687 scope.go:117] "RemoveContainer" containerID="0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170"
Feb 13 19:03:58.649731 containerd[1489]: time="2025-02-13T19:03:58.649698897Z" level=error msg="ContainerStatus for \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\": not found"
Feb 13 19:03:58.649830 kubelet[2687]: E0213 19:03:58.649809 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\": not found" containerID="0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170"
Feb 13 19:03:58.649884 kubelet[2687]: I0213 19:03:58.649837 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170"} err="failed to get container status \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\": rpc error: code = NotFound desc = an error occurred when try to find container \"0693d61b5a7d71517f5d0fcde8681492f11798709f374acc1fafc54c97d7b170\": not found"
Feb 13 19:03:58.649884 kubelet[2687]: I0213 19:03:58.649871 2687 scope.go:117] "RemoveContainer" containerID="4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9"
Feb 13 19:03:58.650815 containerd[1489]: time="2025-02-13T19:03:58.650790539Z" level=info msg="RemoveContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\""
Feb 13 19:03:58.653405 containerd[1489]: time="2025-02-13T19:03:58.653364464Z" level=info msg="RemoveContainer for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" returns successfully"
Feb 13 19:03:58.653717 kubelet[2687]: I0213 19:03:58.653561 2687 scope.go:117] "RemoveContainer" containerID="4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9"
Feb 13 19:03:58.654073 containerd[1489]: time="2025-02-13T19:03:58.653958065Z" level=error msg="ContainerStatus for \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\": not found"
Feb 13 19:03:58.654141 kubelet[2687]: E0213 19:03:58.654119 2687 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\": not found" containerID="4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9"
Feb 13 19:03:58.654170 kubelet[2687]: I0213 19:03:58.654144 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9"} err="failed to get container status \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f2e5a928aab2eb7f78fb0e0e2e28197601c317425459af6a86f950088f6dbc9\": not found"
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670410 2687 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kv7xk\" (UniqueName: \"kubernetes.io/projected/e45be814-07d1-456d-a1af-eeb5cc0d14a8-kube-api-access-kv7xk\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670441 2687 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670451 2687 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d5515-9f41-4e18-98a3-131705548c6b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670459 2687 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670468 2687 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670476 2687 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/709d5515-9f41-4e18-98a3-131705548c6b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670484 2687 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670576 kubelet[2687]: I0213 19:03:58.670492 2687 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670499 2687 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670507 2687 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670514 2687 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670525 2687 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670533 2687 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670541 2687 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gpmpk\" (UniqueName: \"kubernetes.io/projected/709d5515-9f41-4e18-98a3-131705548c6b-kube-api-access-gpmpk\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670548 2687 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/709d5515-9f41-4e18-98a3-131705548c6b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:58.670890 kubelet[2687]: I0213 19:03:58.670556 2687 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e45be814-07d1-456d-a1af-eeb5cc0d14a8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:03:59.265157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f820a046e48838d354df1ec4b5b16f44a8ccc4628a37af46d44a07540dcdf3f-rootfs.mount: Deactivated successfully.
Feb 13 19:03:59.265256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e1cc9010cf3380bafc5d0a895ff267a4a5cfd0f9d3bb33bf2cf240e42d0e6a-rootfs.mount: Deactivated successfully.
Feb 13 19:03:59.265308 systemd[1]: var-lib-kubelet-pods-e45be814\x2d07d1\x2d456d\x2da1af\x2deeb5cc0d14a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkv7xk.mount: Deactivated successfully.
Feb 13 19:03:59.265370 systemd[1]: var-lib-kubelet-pods-709d5515\x2d9f41\x2d4e18\x2d98a3\x2d131705548c6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgpmpk.mount: Deactivated successfully.
Feb 13 19:03:59.265432 systemd[1]: var-lib-kubelet-pods-709d5515\x2d9f41\x2d4e18\x2d98a3\x2d131705548c6b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:03:59.265479 systemd[1]: var-lib-kubelet-pods-709d5515\x2d9f41\x2d4e18\x2d98a3\x2d131705548c6b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:03:59.441572 kubelet[2687]: I0213 19:03:59.441434 2687 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:03:59Z","lastTransitionTime":"2025-02-13T19:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:04:00.206763 sshd[4366]: Connection closed by 10.0.0.1 port 44432
Feb 13 19:04:00.207125 sshd-session[4363]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:00.220504 systemd[1]: sshd@23-10.0.0.40:22-10.0.0.1:44432.service: Deactivated successfully.
Feb 13 19:04:00.222087 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:04:00.222266 systemd[1]: session-24.scope: Consumed 1.769s CPU time, 26.4M memory peak.
Feb 13 19:04:00.222783 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:04:00.234122 systemd[1]: Started sshd@24-10.0.0.40:22-10.0.0.1:44434.service - OpenSSH per-connection server daemon (10.0.0.1:44434).
Feb 13 19:04:00.235384 systemd-logind[1481]: Removed session 24.
Feb 13 19:04:00.272427 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 44434 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:04:00.273594 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:00.277387 systemd-logind[1481]: New session 25 of user core.
Feb 13 19:04:00.285066 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:04:00.350182 kubelet[2687]: E0213 19:04:00.350138 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:00.352559 kubelet[2687]: I0213 19:04:00.352524 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709d5515-9f41-4e18-98a3-131705548c6b" path="/var/lib/kubelet/pods/709d5515-9f41-4e18-98a3-131705548c6b/volumes"
Feb 13 19:04:00.353283 kubelet[2687]: I0213 19:04:00.353085 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45be814-07d1-456d-a1af-eeb5cc0d14a8" path="/var/lib/kubelet/pods/e45be814-07d1-456d-a1af-eeb5cc0d14a8/volumes"
Feb 13 19:04:01.529169 sshd[4530]: Connection closed by 10.0.0.1 port 44434
Feb 13 19:04:01.530552 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:01.543388 kubelet[2687]: I0213 19:04:01.543275 2687 topology_manager.go:215] "Topology Admit Handler" podUID="33afc71d-e37f-47a3-8d5f-8dc26e535b4f" podNamespace="kube-system" podName="cilium-k8b6d"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543402 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e45be814-07d1-456d-a1af-eeb5cc0d14a8" containerName="cilium-operator"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543412 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="mount-cgroup"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543418 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="cilium-agent"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543424 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="apply-sysctl-overwrites"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543430 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="mount-bpf-fs"
Feb 13 19:04:01.546052 kubelet[2687]: E0213 19:04:01.543435 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="clean-cilium-state"
Feb 13 19:04:01.546052 kubelet[2687]: I0213 19:04:01.543458 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45be814-07d1-456d-a1af-eeb5cc0d14a8" containerName="cilium-operator"
Feb 13 19:04:01.546052 kubelet[2687]: I0213 19:04:01.543465 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="709d5515-9f41-4e18-98a3-131705548c6b" containerName="cilium-agent"
Feb 13 19:04:01.543722 systemd[1]: sshd@24-10.0.0.40:22-10.0.0.1:44434.service: Deactivated successfully.
Feb 13 19:04:01.549160 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:04:01.549657 systemd[1]: session-25.scope: Consumed 1.151s CPU time, 25.8M memory peak.
Feb 13 19:04:01.553749 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:04:01.570173 systemd[1]: Started sshd@25-10.0.0.40:22-10.0.0.1:44444.service - OpenSSH per-connection server daemon (10.0.0.1:44444).
Feb 13 19:04:01.572151 systemd-logind[1481]: Removed session 25.
Feb 13 19:04:01.578583 systemd[1]: Created slice kubepods-burstable-pod33afc71d_e37f_47a3_8d5f_8dc26e535b4f.slice - libcontainer container kubepods-burstable-pod33afc71d_e37f_47a3_8d5f_8dc26e535b4f.slice.
Feb 13 19:04:01.607249 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 44444 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:04:01.610038 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:01.615140 systemd-logind[1481]: New session 26 of user core.
Feb 13 19:04:01.623062 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:04:01.672901 sshd[4544]: Connection closed by 10.0.0.1 port 44444
Feb 13 19:04:01.673249 sshd-session[4541]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:01.683511 systemd[1]: sshd@25-10.0.0.40:22-10.0.0.1:44444.service: Deactivated successfully.
Feb 13 19:04:01.684680 kubelet[2687]: I0213 19:04:01.684638 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-host-proc-sys-kernel\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684737 kubelet[2687]: I0213 19:04:01.684688 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-cilium-ipsec-secrets\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684737 kubelet[2687]: I0213 19:04:01.684710 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-cilium-run\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684737 kubelet[2687]: I0213 19:04:01.684726 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-hostproc\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684820 kubelet[2687]: I0213 19:04:01.684743 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-xtables-lock\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684820 kubelet[2687]: I0213 19:04:01.684761 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lh9v\" (UniqueName: \"kubernetes.io/projected/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-kube-api-access-2lh9v\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684820 kubelet[2687]: I0213 19:04:01.684775 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-lib-modules\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684820 kubelet[2687]: I0213 19:04:01.684792 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-clustermesh-secrets\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684820 kubelet[2687]: I0213 19:04:01.684807 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-host-proc-sys-net\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684939 kubelet[2687]: I0213 19:04:01.684823 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-etc-cni-netd\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684939 kubelet[2687]: I0213 19:04:01.684839 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-bpf-maps\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684939 kubelet[2687]: I0213 19:04:01.684887 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-hubble-tls\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.684939 kubelet[2687]: I0213 19:04:01.684914 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-cni-path\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.685018 kubelet[2687]: I0213 19:04:01.684964 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-cilium-cgroup\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.685018 kubelet[2687]: I0213 19:04:01.685002 2687 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33afc71d-e37f-47a3-8d5f-8dc26e535b4f-cilium-config-path\") pod \"cilium-k8b6d\" (UID: \"33afc71d-e37f-47a3-8d5f-8dc26e535b4f\") " pod="kube-system/cilium-k8b6d"
Feb 13 19:04:01.686478 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:04:01.687811 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:04:01.702443 systemd[1]: Started sshd@26-10.0.0.40:22-10.0.0.1:44460.service - OpenSSH per-connection server daemon (10.0.0.1:44460).
Feb 13 19:04:01.703452 systemd-logind[1481]: Removed session 26.
Feb 13 19:04:01.741395 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:04:01.742643 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:01.747599 systemd-logind[1481]: New session 27 of user core.
Feb 13 19:04:01.758109 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:04:01.880903 kubelet[2687]: E0213 19:04:01.880748 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:01.883161 containerd[1489]: time="2025-02-13T19:04:01.883119319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8b6d,Uid:33afc71d-e37f-47a3-8d5f-8dc26e535b4f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:01.912951 containerd[1489]: time="2025-02-13T19:04:01.910886737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:01.912951 containerd[1489]: time="2025-02-13T19:04:01.910948897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:01.912951 containerd[1489]: time="2025-02-13T19:04:01.910964097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:01.912951 containerd[1489]: time="2025-02-13T19:04:01.911056617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:01.932742 systemd[1]: Started cri-containerd-50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7.scope - libcontainer container 50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7.
Feb 13 19:04:01.953473 containerd[1489]: time="2025-02-13T19:04:01.953424866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8b6d,Uid:33afc71d-e37f-47a3-8d5f-8dc26e535b4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\""
Feb 13 19:04:01.954892 kubelet[2687]: E0213 19:04:01.954472 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:01.956628 containerd[1489]: time="2025-02-13T19:04:01.956597793Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:04:01.991497 containerd[1489]: time="2025-02-13T19:04:01.991435706Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4\""
Feb 13 19:04:01.996046 containerd[1489]: time="2025-02-13T19:04:01.995990116Z" level=info msg="StartContainer for \"a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4\""
Feb 13 19:04:02.031061 systemd[1]: Started cri-containerd-a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4.scope - libcontainer container a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4.
Feb 13 19:04:02.053526 containerd[1489]: time="2025-02-13T19:04:02.053421602Z" level=info msg="StartContainer for \"a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4\" returns successfully"
Feb 13 19:04:02.074091 systemd[1]: cri-containerd-a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4.scope: Deactivated successfully.
Feb 13 19:04:02.102652 containerd[1489]: time="2025-02-13T19:04:02.102585191Z" level=info msg="shim disconnected" id=a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4 namespace=k8s.io
Feb 13 19:04:02.102989 containerd[1489]: time="2025-02-13T19:04:02.102707191Z" level=warning msg="cleaning up after shim disconnected" id=a6ba88e889d1ce5d6921114a2cf6f4da46472fa826cf095112c28ecc72ad36e4 namespace=k8s.io
Feb 13 19:04:02.102989 containerd[1489]: time="2025-02-13T19:04:02.102718431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:02.608513 kubelet[2687]: E0213 19:04:02.608487 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:02.611120 containerd[1489]: time="2025-02-13T19:04:02.611078397Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:02.622533 containerd[1489]: time="2025-02-13T19:04:02.622464622Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d\""
Feb 13 19:04:02.624477 containerd[1489]: time="2025-02-13T19:04:02.624271066Z" level=info msg="StartContainer for \"097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d\""
Feb 13 19:04:02.653036 systemd[1]: Started cri-containerd-097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d.scope - libcontainer container 097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d.
Feb 13 19:04:02.677007 containerd[1489]: time="2025-02-13T19:04:02.676940663Z" level=info msg="StartContainer for \"097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d\" returns successfully"
Feb 13 19:04:02.683979 systemd[1]: cri-containerd-097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d.scope: Deactivated successfully.
Feb 13 19:04:02.703236 containerd[1489]: time="2025-02-13T19:04:02.703176321Z" level=info msg="shim disconnected" id=097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d namespace=k8s.io
Feb 13 19:04:02.703236 containerd[1489]: time="2025-02-13T19:04:02.703224681Z" level=warning msg="cleaning up after shim disconnected" id=097a386dfce762fa949fc0a220510722828dd2b3775f2f26e6672de797c5973d namespace=k8s.io
Feb 13 19:04:02.703236 containerd[1489]: time="2025-02-13T19:04:02.703232761Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:03.410769 kubelet[2687]: E0213 19:04:03.410723 2687 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:04:03.612052 kubelet[2687]: E0213 19:04:03.611878 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:03.614906 containerd[1489]: time="2025-02-13T19:04:03.614789488Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:03.629538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838533095.mount: Deactivated successfully.
Feb 13 19:04:03.630373 containerd[1489]: time="2025-02-13T19:04:03.629516482Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba\""
Feb 13 19:04:03.630373 containerd[1489]: time="2025-02-13T19:04:03.630563564Z" level=info msg="StartContainer for \"aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba\""
Feb 13 19:04:03.663077 systemd[1]: Started cri-containerd-aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba.scope - libcontainer container aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba.
Feb 13 19:04:03.689079 containerd[1489]: time="2025-02-13T19:04:03.688953140Z" level=info msg="StartContainer for \"aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba\" returns successfully"
Feb 13 19:04:03.692140 systemd[1]: cri-containerd-aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba.scope: Deactivated successfully.
Feb 13 19:04:03.723332 containerd[1489]: time="2025-02-13T19:04:03.723272860Z" level=info msg="shim disconnected" id=aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba namespace=k8s.io
Feb 13 19:04:03.723332 containerd[1489]: time="2025-02-13T19:04:03.723329180Z" level=warning msg="cleaning up after shim disconnected" id=aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba namespace=k8s.io
Feb 13 19:04:03.723332 containerd[1489]: time="2025-02-13T19:04:03.723339340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:03.790003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa3c52c75d047d44fe6166c8a0bc5bad8fe86f7665522034eca9fc02ef577dba-rootfs.mount: Deactivated successfully.
Feb 13 19:04:04.620294 kubelet[2687]: E0213 19:04:04.620251 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:04.629689 containerd[1489]: time="2025-02-13T19:04:04.629552915Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:04.655015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774364684.mount: Deactivated successfully.
Feb 13 19:04:04.657716 containerd[1489]: time="2025-02-13T19:04:04.657651823Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c\""
Feb 13 19:04:04.659452 containerd[1489]: time="2025-02-13T19:04:04.658555065Z" level=info msg="StartContainer for \"f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c\""
Feb 13 19:04:04.693091 systemd[1]: Started cri-containerd-f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c.scope - libcontainer container f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c.
Feb 13 19:04:04.719763 systemd[1]: cri-containerd-f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c.scope: Deactivated successfully.
Feb 13 19:04:04.722019 containerd[1489]: time="2025-02-13T19:04:04.721966580Z" level=info msg="StartContainer for \"f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c\" returns successfully"
Feb 13 19:04:04.744284 containerd[1489]: time="2025-02-13T19:04:04.744080113Z" level=info msg="shim disconnected" id=f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c namespace=k8s.io
Feb 13 19:04:04.744284 containerd[1489]: time="2025-02-13T19:04:04.744134913Z" level=warning msg="cleaning up after shim disconnected" id=f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c namespace=k8s.io
Feb 13 19:04:04.744284 containerd[1489]: time="2025-02-13T19:04:04.744142873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:04.789992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57dc425313c93e84f79451bb7c84d5c3d5d1a9599a4213865daf6dcd18d750c-rootfs.mount: Deactivated successfully.
Feb 13 19:04:05.349330 kubelet[2687]: E0213 19:04:05.349280 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:05.622429 kubelet[2687]: E0213 19:04:05.622042 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:05.628654 containerd[1489]: time="2025-02-13T19:04:05.628614571Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:04:05.664202 containerd[1489]: time="2025-02-13T19:04:05.664145181Z" level=info msg="CreateContainer within sandbox \"50838faf5e08f9d2e5a0c819f2f712a3435a39645d7558eb7bc5f29e124831f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c0cfd0cf7dde46bff63e20e7d7c2897e7182de132c7e3483910e68f672365f8\""
Feb 13 19:04:05.664846 containerd[1489]: time="2025-02-13T19:04:05.664812862Z" level=info msg="StartContainer for \"4c0cfd0cf7dde46bff63e20e7d7c2897e7182de132c7e3483910e68f672365f8\""
Feb 13 19:04:05.704031 systemd[1]: Started cri-containerd-4c0cfd0cf7dde46bff63e20e7d7c2897e7182de132c7e3483910e68f672365f8.scope - libcontainer container 4c0cfd0cf7dde46bff63e20e7d7c2897e7182de132c7e3483910e68f672365f8.
Feb 13 19:04:05.731039 containerd[1489]: time="2025-02-13T19:04:05.730990430Z" level=info msg="StartContainer for \"4c0cfd0cf7dde46bff63e20e7d7c2897e7182de132c7e3483910e68f672365f8\" returns successfully"
Feb 13 19:04:06.032884 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:04:06.626190 kubelet[2687]: E0213 19:04:06.626163 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:06.639147 kubelet[2687]: I0213 19:04:06.639082 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k8b6d" podStartSLOduration=5.639060319 podStartE2EDuration="5.639060319s" podCreationTimestamp="2025-02-13 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:06.639016518 +0000 UTC m=+88.384540215" watchObservedRunningTime="2025-02-13 19:04:06.639060319 +0000 UTC m=+88.384584016"
Feb 13 19:04:07.882666 kubelet[2687]: E0213 19:04:07.882633 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:08.921289 systemd-networkd[1410]: lxc_health: Link UP
Feb 13 19:04:08.923976 systemd-networkd[1410]: lxc_health: Gained carrier
Feb 13 19:04:09.884831 kubelet[2687]: E0213 19:04:09.884767 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:09.967997 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Feb 13 19:04:10.635424 kubelet[2687]: E0213 19:04:10.635366 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:14.632986 sshd[4553]: Connection closed by 10.0.0.1 port 44460
Feb 13 19:04:14.633965 sshd-session[4550]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:14.638428 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:04:14.639392 systemd[1]: sshd@26-10.0.0.40:22-10.0.0.1:44460.service: Deactivated successfully.
Feb 13 19:04:14.642750 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:04:14.643919 systemd-logind[1481]: Removed session 27.
Feb 13 19:04:15.349548 kubelet[2687]: E0213 19:04:15.349501 2687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"