Feb 13 19:29:45.936258 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:29:45.936281 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:29:45.936292 kernel: KASLR enabled
Feb 13 19:29:45.936298 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:29:45.936304 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 19:29:45.936311 kernel: random: crng init done
Feb 13 19:29:45.936318 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:29:45.936324 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 19:29:45.936331 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:29:45.936339 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936345 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936352 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936358 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936365 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936372 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936381 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936388 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936395 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:29:45.936402 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:29:45.936408 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:29:45.936415 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:29:45.936422 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 19:29:45.936429 kernel: Zone ranges:
Feb 13 19:29:45.936436 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:29:45.936442 kernel: DMA32 empty
Feb 13 19:29:45.936451 kernel: Normal empty
Feb 13 19:29:45.936457 kernel: Movable zone start for each node
Feb 13 19:29:45.936464 kernel: Early memory node ranges
Feb 13 19:29:45.936471 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:29:45.936478 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:29:45.936485 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:29:45.936492 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:29:45.936499 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:29:45.936506 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:29:45.936513 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:29:45.936520 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:29:45.936527 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:29:45.936535 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:29:45.936542 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:29:45.936549 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:29:45.936559 kernel: psci: Trusted OS migration not required
Feb 13 19:29:45.936566 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:29:45.936574 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:29:45.936583 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:29:45.936591 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:29:45.936598 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:29:45.936606 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:29:45.936613 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:29:45.936620 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:29:45.936627 kernel: CPU features: detected: Spectre-v4
Feb 13 19:29:45.936635 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:29:45.936642 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:29:45.936649 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:29:45.936658 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:29:45.936666 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:29:45.936673 kernel: alternatives: applying boot alternatives
Feb 13 19:29:45.936681 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:29:45.936689 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:29:45.936696 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:29:45.936704 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:29:45.936711 kernel: Fallback order for Node 0: 0
Feb 13 19:29:45.936718 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:29:45.936725 kernel: Policy zone: DMA
Feb 13 19:29:45.936733 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:29:45.936742 kernel: software IO TLB: area num 4.
Feb 13 19:29:45.936758 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:29:45.936766 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 19:29:45.936773 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:29:45.936781 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:29:45.936789 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:29:45.936796 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:29:45.936804 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:29:45.936811 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:29:45.936819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:29:45.936826 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:29:45.936834 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:29:45.936843 kernel: GICv3: 256 SPIs implemented
Feb 13 19:29:45.936851 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:29:45.936858 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:29:45.936866 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:29:45.936873 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:29:45.936880 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:29:45.936888 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:29:45.936980 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:29:45.936988 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:29:45.936995 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:29:45.937003 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:29:45.937013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:29:45.937020 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:29:45.937027 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:29:45.937035 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:29:45.937042 kernel: arm-pv: using stolen time PV
Feb 13 19:29:45.937050 kernel: Console: colour dummy device 80x25
Feb 13 19:29:45.937057 kernel: ACPI: Core revision 20230628
Feb 13 19:29:45.937065 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:29:45.937073 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:29:45.937080 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:29:45.937088 kernel: landlock: Up and running.
Feb 13 19:29:45.937095 kernel: SELinux: Initializing.
Feb 13 19:29:45.937102 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:29:45.937109 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:29:45.937116 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:29:45.937123 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:29:45.937131 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:29:45.937138 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:29:45.937145 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:29:45.937154 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:29:45.937161 kernel: Remapping and enabling EFI services.
Feb 13 19:29:45.937168 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:29:45.937175 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:29:45.937183 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:29:45.937190 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:29:45.937198 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:29:45.937205 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:29:45.937212 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:29:45.937219 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:29:45.937228 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:29:45.937236 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:29:45.937248 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:29:45.937258 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:29:45.937265 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:29:45.937273 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:29:45.937281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:29:45.937288 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:29:45.937296 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:29:45.937306 kernel: SMP: Total of 4 processors activated.
Feb 13 19:29:45.937313 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:29:45.937321 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:29:45.937328 kernel: CPU features: detected: Common not Private translations
Feb 13 19:29:45.937336 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:29:45.937344 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:29:45.937351 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:29:45.937359 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:29:45.937368 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:29:45.937375 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:29:45.937383 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:29:45.937390 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:29:45.937398 kernel: alternatives: applying system-wide alternatives
Feb 13 19:29:45.937406 kernel: devtmpfs: initialized
Feb 13 19:29:45.937413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:29:45.937421 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:29:45.937429 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:29:45.937438 kernel: SMBIOS 3.0.0 present.
Feb 13 19:29:45.937446 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 19:29:45.937453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:29:45.937461 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:29:45.937469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:29:45.937476 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:29:45.937484 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:29:45.937492 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 19:29:45.937500 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:29:45.937509 kernel: cpuidle: using governor menu
Feb 13 19:29:45.937517 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:29:45.937524 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:29:45.937532 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:29:45.937539 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:29:45.937547 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:29:45.937555 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:29:45.937563 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:29:45.937570 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:29:45.937579 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:29:45.937587 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:29:45.937595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:29:45.937602 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:29:45.937610 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:29:45.937617 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:29:45.937625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:29:45.937632 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:29:45.937640 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:29:45.937649 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:29:45.937656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:29:45.937664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:29:45.937672 kernel: ACPI: Interpreter enabled
Feb 13 19:29:45.937679 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:29:45.937687 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:29:45.937694 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:29:45.937702 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:29:45.937710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:29:45.937863 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:29:45.937953 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:29:45.938024 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:29:45.938092 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:29:45.938165 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:29:45.938181 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:29:45.938194 kernel: PCI host bridge to bus 0000:00
Feb 13 19:29:45.938283 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:29:45.938346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:29:45.938407 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:29:45.938467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:29:45.938549 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:29:45.938630 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:29:45.938705 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:29:45.938785 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:29:45.938858 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:29:45.938936 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:29:45.939004 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:29:45.939072 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:29:45.939133 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:29:45.939196 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:29:45.939255 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:29:45.939266 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:29:45.939274 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:29:45.939281 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:29:45.939289 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:29:45.939296 kernel: iommu: Default domain type: Translated
Feb 13 19:29:45.939304 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:29:45.939311 kernel: efivars: Registered efivars operations
Feb 13 19:29:45.939320 kernel: vgaarb: loaded
Feb 13 19:29:45.939328 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:29:45.939335 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:29:45.939343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:29:45.939350 kernel: pnp: PnP ACPI init
Feb 13 19:29:45.939428 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:29:45.939439 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:29:45.939447 kernel: NET: Registered PF_INET protocol family
Feb 13 19:29:45.939457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:29:45.939465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:29:45.939473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:29:45.939481 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:29:45.939489 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:29:45.939496 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:29:45.939504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:29:45.939511 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:29:45.939519 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:29:45.939528 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:29:45.939536 kernel: kvm [1]: HYP mode not available
Feb 13 19:29:45.939543 kernel: Initialise system trusted keyrings
Feb 13 19:29:45.939551 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:29:45.939558 kernel: Key type asymmetric registered
Feb 13 19:29:45.939565 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:29:45.939573 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:29:45.939580 kernel: io scheduler mq-deadline registered
Feb 13 19:29:45.939587 kernel: io scheduler kyber registered
Feb 13 19:29:45.939596 kernel: io scheduler bfq registered
Feb 13 19:29:45.939604 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:29:45.939612 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:29:45.939620 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:29:45.939709 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:29:45.939720 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:29:45.939727 kernel: thunder_xcv, ver 1.0
Feb 13 19:29:45.939735 kernel: thunder_bgx, ver 1.0
Feb 13 19:29:45.939743 kernel: nicpf, ver 1.0
Feb 13 19:29:45.939758 kernel: nicvf, ver 1.0
Feb 13 19:29:45.939836 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:29:45.939928 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:29:45 UTC (1739474985)
Feb 13 19:29:45.939940 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:29:45.939948 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:29:45.939955 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:29:45.939963 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:29:45.939970 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:29:45.939981 kernel: Segment Routing with IPv6
Feb 13 19:29:45.939989 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:29:45.939996 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:29:45.940003 kernel: Key type dns_resolver registered
Feb 13 19:29:45.940011 kernel: registered taskstats version 1
Feb 13 19:29:45.940018 kernel: Loading compiled-in X.509 certificates
Feb 13 19:29:45.940026 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:29:45.940033 kernel: Key type .fscrypt registered
Feb 13 19:29:45.940040 kernel: Key type fscrypt-provisioning registered
Feb 13 19:29:45.940050 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:29:45.940057 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:29:45.940065 kernel: ima: No architecture policies found
Feb 13 19:29:45.940072 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:29:45.940080 kernel: clk: Disabling unused clocks
Feb 13 19:29:45.940087 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:29:45.940094 kernel: Run /init as init process
Feb 13 19:29:45.940102 kernel: with arguments:
Feb 13 19:29:45.940109 kernel: /init
Feb 13 19:29:45.940117 kernel: with environment:
Feb 13 19:29:45.940125 kernel: HOME=/
Feb 13 19:29:45.940132 kernel: TERM=linux
Feb 13 19:29:45.940139 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:29:45.940149 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:29:45.940159 systemd[1]: Detected virtualization kvm.
Feb 13 19:29:45.940168 systemd[1]: Detected architecture arm64.
Feb 13 19:29:45.940175 systemd[1]: Running in initrd.
Feb 13 19:29:45.940185 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:29:45.940192 systemd[1]: Hostname set to <localhost>.
Feb 13 19:29:45.940200 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:29:45.940208 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:29:45.940216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:29:45.940225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:29:45.940237 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:29:45.940248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:29:45.940258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:29:45.940266 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:29:45.940276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:29:45.940284 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:29:45.940292 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:29:45.940300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:29:45.940310 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:29:45.940318 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:29:45.940326 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:29:45.940334 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:29:45.940342 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:29:45.940350 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:29:45.940358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:29:45.940366 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:29:45.940375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:29:45.940384 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:29:45.940392 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:29:45.940400 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:29:45.940408 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:29:45.940416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:29:45.940424 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:29:45.940433 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:29:45.940441 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:29:45.940449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:29:45.940459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:45.940467 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:29:45.940475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:29:45.940483 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:29:45.940491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:29:45.940502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:45.940510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:45.940536 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 19:29:45.940558 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:29:45.940567 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:29:45.940574 kernel: Bridge firewalling registered
Feb 13 19:29:45.940582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:29:45.940591 systemd-journald[237]: Journal started
Feb 13 19:29:45.940610 systemd-journald[237]: Runtime Journal (/run/log/journal/b714e83d2a96491cbb37fb83b59fb4be) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:29:45.916115 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 19:29:45.940193 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 19:29:45.944505 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:29:45.944878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:29:45.949411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:29:45.952520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:29:45.953536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:29:45.962768 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:45.974103 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:29:45.975038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:29:45.976357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:29:45.979436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:29:45.984997 dracut-cmdline[274]: dracut-dracut-053
Feb 13 19:29:45.989762 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:29:46.019383 systemd-resolved[281]: Positive Trust Anchors:
Feb 13 19:29:46.019401 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:29:46.019434 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:29:46.025454 systemd-resolved[281]: Defaulting to hostname 'linux'.
Feb 13 19:29:46.026535 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:29:46.027457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:29:46.060924 kernel: SCSI subsystem initialized
Feb 13 19:29:46.064908 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:29:46.072920 kernel: iscsi: registered transport (tcp)
Feb 13 19:29:46.085144 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:29:46.085175 kernel: QLogic iSCSI HBA Driver
Feb 13 19:29:46.129798 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:29:46.134086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:29:46.152199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:29:46.152277 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:29:46.152291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:29:46.199920 kernel: raid6: neonx8 gen() 15763 MB/s
Feb 13 19:29:46.216908 kernel: raid6: neonx4 gen() 15659 MB/s
Feb 13 19:29:46.233903 kernel: raid6: neonx2 gen() 13250 MB/s
Feb 13 19:29:46.250904 kernel: raid6: neonx1 gen() 10495 MB/s
Feb 13 19:29:46.267914 kernel: raid6: int64x8 gen() 6956 MB/s
Feb 13 19:29:46.284906 kernel: raid6: int64x4 gen() 7352 MB/s
Feb 13 19:29:46.301912 kernel: raid6: int64x2 gen() 6131 MB/s
Feb 13 19:29:46.318912 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 13 19:29:46.318937 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s
Feb 13 19:29:46.335916 kernel: raid6: .... xor() 11935 MB/s, rmw enabled
Feb 13 19:29:46.335929 kernel: raid6: using neon recovery algorithm
Feb 13 19:29:46.340910 kernel: xor: measuring software checksum speed
Feb 13 19:29:46.340924 kernel: 8regs : 19797 MB/sec
Feb 13 19:29:46.342262 kernel: 32regs : 18202 MB/sec
Feb 13 19:29:46.342274 kernel: arm64_neon : 27043 MB/sec
Feb 13 19:29:46.342283 kernel: xor: using function: arm64_neon (27043 MB/sec)
Feb 13 19:29:46.393917 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:29:46.405456 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:29:46.419120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:29:46.432023 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 19:29:46.435156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:29:46.437504 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:29:46.453166 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 19:29:46.481210 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:29:46.491106 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:29:46.531612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:29:46.539123 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:29:46.550929 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:29:46.553444 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:29:46.554600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:29:46.556351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:29:46.565112 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:29:46.572788 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:29:46.582070 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:29:46.582184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:29:46.582196 kernel: GPT:9289727 != 19775487
Feb 13 19:29:46.582214 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:29:46.582225 kernel: GPT:9289727 != 19775487
Feb 13 19:29:46.582236 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:29:46.582246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:29:46.577552 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:29:46.582911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:29:46.583041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:46.584107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:46.584955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:29:46.585098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:46.586957 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:46.594142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:46.603134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:29:46.606179 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520)
Feb 13 19:29:46.608563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:46.610929 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
Feb 13 19:29:46.617043 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:29:46.623524 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:29:46.624475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:29:46.629695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:29:46.650119 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:29:46.651758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:46.656472 disk-uuid[551]: Primary Header is updated.
Feb 13 19:29:46.656472 disk-uuid[551]: Secondary Entries is updated.
Feb 13 19:29:46.656472 disk-uuid[551]: Secondary Header is updated.
Feb 13 19:29:46.658910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:29:46.689331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:47.678915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:29:47.679357 disk-uuid[552]: The operation has completed successfully.
Feb 13 19:29:47.702814 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:29:47.702970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:29:47.729093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:29:47.732163 sh[570]: Success
Feb 13 19:29:47.743928 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:29:47.773428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:29:47.782335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:29:47.784962 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:29:47.794575 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:29:47.794639 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:29:47.794661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:29:47.794691 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:29:47.795903 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:29:47.798814 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:29:47.800058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:29:47.807106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:29:47.808469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:29:47.817497 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:29:47.817549 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:29:47.818050 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:29:47.820912 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:29:47.828926 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:29:47.829220 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:29:47.836451 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:29:47.844125 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:29:47.906321 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:29:47.920082 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:29:47.937337 ignition[661]: Ignition 2.19.0
Feb 13 19:29:47.937351 ignition[661]: Stage: fetch-offline
Feb 13 19:29:47.937387 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:47.937395 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:29:47.937568 ignition[661]: parsed url from cmdline: ""
Feb 13 19:29:47.937571 ignition[661]: no config URL provided
Feb 13 19:29:47.937576 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:29:47.937583 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:29:47.937607 ignition[661]: op(1): [started] loading QEMU firmware config module
Feb 13 19:29:47.937613 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:29:47.946445 systemd-networkd[761]: lo: Link UP
Feb 13 19:29:47.946458 systemd-networkd[761]: lo: Gained carrier
Feb 13 19:29:47.947196 systemd-networkd[761]: Enumeration completed
Feb 13 19:29:47.947724 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:47.947728 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:29:47.948763 systemd-networkd[761]: eth0: Link UP
Feb 13 19:29:47.948766 systemd-networkd[761]: eth0: Gained carrier
Feb 13 19:29:47.948773 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:47.950487 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:29:47.951624 systemd[1]: Reached target network.target - Network.
Feb 13 19:29:47.954825 ignition[661]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:29:47.966932 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:29:47.992479 ignition[661]: parsing config with SHA512: 16e7d5d65a862a0659e796baf3c673d69e8be831f146d1bdf20ef4874fdc27fb1c21fd9783c06831590c9e00d4f166c71caad59e9a541e5d87a0d3a4c0511141
Feb 13 19:29:47.996812 unknown[661]: fetched base config from "system"
Feb 13 19:29:47.996824 unknown[661]: fetched user config from "qemu"
Feb 13 19:29:47.998271 ignition[661]: fetch-offline: fetch-offline passed
Feb 13 19:29:47.998379 ignition[661]: Ignition finished successfully
Feb 13 19:29:47.999500 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:29:48.000806 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:29:48.007148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:29:48.017711 ignition[769]: Ignition 2.19.0
Feb 13 19:29:48.017722 ignition[769]: Stage: kargs
Feb 13 19:29:48.017987 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:48.017997 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:29:48.018975 ignition[769]: kargs: kargs passed
Feb 13 19:29:48.019024 ignition[769]: Ignition finished successfully
Feb 13 19:29:48.021888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:29:48.024139 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:29:48.037958 ignition[778]: Ignition 2.19.0
Feb 13 19:29:48.037967 ignition[778]: Stage: disks
Feb 13 19:29:48.038149 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:48.038158 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:29:48.041528 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:29:48.039128 ignition[778]: disks: disks passed
Feb 13 19:29:48.042717 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:29:48.039177 ignition[778]: Ignition finished successfully
Feb 13 19:29:48.043586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:29:48.044968 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:29:48.045968 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:29:48.047376 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:29:48.058088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:29:48.067923 systemd-resolved[281]: Detected conflict on linux IN A 10.0.0.31
Feb 13 19:29:48.067939 systemd-resolved[281]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Feb 13 19:29:48.070019 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:29:48.073956 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:29:48.079030 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:29:48.125916 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:29:48.126594 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:29:48.127695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:29:48.137996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:29:48.139549 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:29:48.140592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:29:48.140663 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:29:48.140717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:29:48.149497 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Feb 13 19:29:48.149521 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:29:48.149537 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:29:48.149548 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:29:48.149558 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:29:48.146672 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:29:48.150964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:29:48.152646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:29:48.195695 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:29:48.199890 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:29:48.203256 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:29:48.207054 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:29:48.281987 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:29:48.295005 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:29:48.296459 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:29:48.302076 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:29:48.318875 ignition[910]: INFO : Ignition 2.19.0
Feb 13 19:29:48.318875 ignition[910]: INFO : Stage: mount
Feb 13 19:29:48.320090 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:48.320090 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:29:48.319930 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:29:48.323665 ignition[910]: INFO : mount: mount passed
Feb 13 19:29:48.323665 ignition[910]: INFO : Ignition finished successfully
Feb 13 19:29:48.321857 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:29:48.332016 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:29:48.793189 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:29:48.804097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:29:48.810260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Feb 13 19:29:48.810295 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:29:48.810316 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:29:48.810910 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:29:48.813928 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:29:48.814350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:29:48.830688 ignition[941]: INFO : Ignition 2.19.0
Feb 13 19:29:48.830688 ignition[941]: INFO : Stage: files
Feb 13 19:29:48.831950 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:48.831950 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:29:48.831950 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:29:48.834422 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:29:48.834422 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:29:48.834422 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:29:48.837394 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:29:48.834877 unknown[941]: wrote ssh authorized keys file for user: core
Feb 13 19:29:48.888731 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:29:49.114672 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:29:49.114672 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:29:49.117663 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:29:49.439063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 19:29:49.499143 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:29:49.500576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:29:49.731085 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 19:29:49.935165 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:29:49.935165 ignition[941]: INFO : files: op(d): [started] processing unit "containerd.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(d): [finished] processing unit "containerd.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 13 19:29:49.937850 ignition[941]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:29:49.943022 systemd-networkd[761]: eth0: Gained IPv6LL
Feb 13 19:29:49.960098 ignition[941]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:29:49.963856 ignition[941]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:29:49.966007 ignition[941]: INFO : files: files passed
Feb 13 19:29:49.966007 ignition[941]: INFO : Ignition finished successfully
Feb 13 19:29:49.966382 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:29:49.975033 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:29:49.976546 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:29:49.979748 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:29:49.979826 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:29:49.984112 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:29:49.987256 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:49.987256 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:49.989580 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:49.990863 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:29:49.991880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:29:50.003028 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:29:50.023653 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:29:50.023804 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:29:50.025558 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:29:50.026368 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:29:50.027880 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:29:50.036046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:29:50.047762 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:29:50.049872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:29:50.060879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:29:50.061801 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:29:50.063348 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:29:50.064720 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:29:50.064839 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:29:50.066755 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:29:50.068291 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:29:50.069553 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:29:50.070817 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:29:50.072313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:29:50.073818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:29:50.075202 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:29:50.076750 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:29:50.078196 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:29:50.079448 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:29:50.080562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:29:50.080685 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:29:50.082539 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:29:50.084023 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:29:50.085487 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:29:50.085589 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:29:50.087047 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:29:50.087171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:29:50.089258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:29:50.089381 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:29:50.090906 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:29:50.092056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:29:50.092955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:29:50.094468 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:29:50.095655 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:29:50.097022 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:29:50.097122 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:29:50.098687 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:29:50.098796 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:29:50.099952 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:29:50.100064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:29:50.101416 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:29:50.101515 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:29:50.116090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
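
The Ignition files stage recorded above wrote two payload files, fetched the kubernetes sysext image from the sysext-bakery release, created the /etc/extensions symlink, installed a containerd drop-in plus two units, and set enable/disable presets. A config that drives such a sequence would look roughly like the following sketch (Ignition spec 3.x JSON is assumed; the actual file and unit contents are not recorded in the log, so the data URLs and unit bodies below are placeholders; the /sysroot prefix in the log is just the initramfs mount point of the real root and does not appear in the config):

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          { "path": "/home/core/nfs-pvc.yaml", "contents": { "source": "data:,placeholder" } },
          { "path": "/etc/flatcar/update.conf", "contents": { "source": "data:,placeholder" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "containerd.service",
            "dropins": [ { "name": "10-use-cgroupfs.conf", "contents": "[Service]\nplaceholder" } ] },
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nplaceholder" },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }
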
Feb 13 19:29:50.117561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:29:50.118239 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:29:50.118354 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:29:50.119779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:29:50.119873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:29:50.124481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:29:50.124580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:29:50.130233 ignition[996]: INFO : Ignition 2.19.0 Feb 13 19:29:50.130233 ignition[996]: INFO : Stage: umount Feb 13 19:29:50.131664 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:50.131664 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:50.133663 ignition[996]: INFO : umount: umount passed Feb 13 19:29:50.133663 ignition[996]: INFO : Ignition finished successfully Feb 13 19:29:50.132056 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:29:50.135207 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:29:50.135310 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:29:50.136545 systemd[1]: Stopped target network.target - Network. Feb 13 19:29:50.137562 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:29:50.137618 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:29:50.138881 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:29:50.138932 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:29:50.140055 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:29:50.140094 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:29:50.142228 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:29:50.142280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:29:50.143478 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:29:50.144629 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:29:50.149703 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:29:50.149810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:29:50.152317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:29:50.152369 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:29:50.153488 systemd-networkd[761]: eth0: DHCPv6 lease lost Feb 13 19:29:50.155553 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:29:50.155691 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:29:50.157552 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:29:50.157583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:29:50.165033 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:29:50.165689 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:29:50.165758 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:29:50.167265 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 19:29:50.167302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:29:50.168707 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:29:50.168757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:29:50.170502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:29:50.179374 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:29:50.179488 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:29:50.187711 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:29:50.187868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:29:50.189596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:29:50.189635 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:29:50.190955 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:29:50.190985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:29:50.192522 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:29:50.192568 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:29:50.194578 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:29:50.194619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:29:50.196557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:29:50.196595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:29:50.212068 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:29:50.212838 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:29:50.212913 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:29:50.214581 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:29:50.214626 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:29:50.216087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:29:50.216124 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:29:50.217704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:29:50.217750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:50.219457 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:29:50.219540 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:29:50.221816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:29:50.221917 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:29:50.223936 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:29:50.224752 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:29:50.224812 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:29:50.226371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:29:50.235744 systemd[1]: Switching root. Feb 13 19:29:50.264516 systemd-journald[237]: Journal stopped Feb 13 19:29:50.982825 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
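
Here the initramfs hands off to the real root filesystem: systemd-journald receives SIGTERM from PID 1, the root switches, and logging resumes from the journal on the new root, with the SELinux policy load following immediately after. On a booted machine the same hand-off can be replayed from the current boot's journal; a hypothetical inspection command (not part of the log, using standard journalctl options):

    journalctl -b -o short-precise | grep -E 'Switching root|Journal stopped|SELinux'
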
Feb 13 19:29:50.982882 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:29:50.982907 kernel: SELinux: policy capability open_perms=1 Feb 13 19:29:50.982918 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:29:50.982927 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:29:50.982938 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:29:50.982948 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:29:50.982957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:29:50.982967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:29:50.982976 kernel: audit: type=1403 audit(1739474990.456:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:29:50.982987 systemd[1]: Successfully loaded SELinux policy in 32.785ms. Feb 13 19:29:50.983013 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.222ms. Feb 13 19:29:50.983025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:29:50.983037 systemd[1]: Detected virtualization kvm. Feb 13 19:29:50.983048 systemd[1]: Detected architecture arm64. Feb 13 19:29:50.983059 systemd[1]: Detected first boot. Feb 13 19:29:50.983069 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:29:50.983086 zram_generator::config[1062]: No configuration found. Feb 13 19:29:50.983098 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:29:50.983108 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:29:50.983119 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:29:50.983129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:29:50.983142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:29:50.983153 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:29:50.983163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:29:50.983173 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:29:50.983184 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:29:50.983194 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:29:50.983204 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:29:50.983215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:29:50.983226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:29:50.983238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:29:50.983250 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:29:50.983261 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:29:50.983272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:29:50.983288 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:29:50.983299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:29:50.983309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:29:50.983319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:29:50.983330 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:29:50.983342 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:29:50.983353 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:29:50.983363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:29:50.983374 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:29:50.983384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:29:50.983395 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:29:50.983405 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:29:50.983416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:29:50.983427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:29:50.983438 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:29:50.983449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:29:50.983459 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:29:50.983469 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:29:50.983480 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:29:50.983491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:29:50.983502 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:29:50.983512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:29:50.983524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:50.983535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:29:50.983545 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:29:50.983556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:50.983566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:29:50.983577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:50.983587 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:29:50.983597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:50.983609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:29:50.983624 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:29:50.983634 kernel: fuse: init (API version 7.39) Feb 13 19:29:50.983644 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Feb 13 19:29:50.983655 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:29:50.983665 kernel: ACPI: bus type drm_connector registered Feb 13 19:29:50.983675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:29:50.983685 kernel: loop: module loaded Feb 13 19:29:50.983695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:29:50.983708 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:29:50.983719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:29:50.983737 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:29:50.983748 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:29:50.983777 systemd-journald[1148]: Collecting audit messages is disabled. Feb 13 19:29:50.983799 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:29:50.983809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:29:50.983820 systemd-journald[1148]: Journal started Feb 13 19:29:50.983843 systemd-journald[1148]: Runtime Journal (/run/log/journal/b714e83d2a96491cbb37fb83b59fb4be) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:29:50.986312 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:29:50.987221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:29:50.988106 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:29:50.989131 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:29:50.990300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:29:50.991417 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:29:50.991581 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:29:50.992681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:50.992846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:50.993935 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:29:50.994096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:29:50.995082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:50.995233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:50.996387 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:29:50.996540 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:29:50.997808 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:50.998029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:50.999350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:29:51.000587 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:29:51.001808 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:29:51.012708 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:29:51.023962 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:29:51.025744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 19:29:51.026607 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:29:51.029471 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:29:51.032140 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:29:51.033080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:29:51.034159 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:29:51.035088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:29:51.036555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:29:51.042231 systemd-journald[1148]: Time spent on flushing to /var/log/journal/b714e83d2a96491cbb37fb83b59fb4be is 19.571ms for 850 entries. Feb 13 19:29:51.042231 systemd-journald[1148]: System Journal (/var/log/journal/b714e83d2a96491cbb37fb83b59fb4be) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:29:51.078275 systemd-journald[1148]: Received client request to flush runtime journal. Feb 13 19:29:51.043366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:29:51.045611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:29:51.047124 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:29:51.048103 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:29:51.049227 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:29:51.054932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:29:51.058038 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:29:51.067873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:29:51.069350 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 19:29:51.069360 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 19:29:51.070587 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:29:51.073057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:29:51.085035 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:29:51.086273 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:29:51.106664 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:29:51.125048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:29:51.137857 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 19:29:51.137876 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 19:29:51.141735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:29:51.462451 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:29:51.474083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:29:51.494046 systemd-udevd[1220]: Using default interface naming scheme 'v255'. Feb 13 19:29:51.507639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:29:51.517142 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:29:51.543914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1235) Feb 13 19:29:51.546108 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:29:51.573742 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 19:29:51.586618 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:29:51.596345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:29:51.647639 systemd-networkd[1228]: lo: Link UP Feb 13 19:29:51.647648 systemd-networkd[1228]: lo: Gained carrier Feb 13 19:29:51.648417 systemd-networkd[1228]: Enumeration completed Feb 13 19:29:51.651962 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:29:51.655262 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:51.655274 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:29:51.655992 systemd-networkd[1228]: eth0: Link UP Feb 13 19:29:51.656003 systemd-networkd[1228]: eth0: Gained carrier Feb 13 19:29:51.656016 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:51.665162 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:29:51.668044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:51.673945 systemd-networkd[1228]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:29:51.677559 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:29:51.680174 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:29:51.700499 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:29:51.720110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:51.741964 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:29:51.743469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:29:51.753032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:29:51.757460 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:29:51.795352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:29:51.796498 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:29:51.797457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:29:51.797488 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:29:51.798239 systemd[1]: Reached target machines.target - Containers. 
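
The entries above show systemd-networkd matching eth0 against the shipped zz-default.network (hence the note about the potentially unpredictable interface name) and acquiring 10.0.0.31/16 from the DHCP server at 10.0.0.1. A hypothetical way to inspect the same state interactively on the running machine (not part of the log):

    networkctl status eth0    # which .network file matched, carrier state, DHCPv4 lease
    resolvectl status eth0    # per-link DNS taken from the lease
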
Feb 13 19:29:51.799955 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:29:51.812033 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:29:51.814081 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:29:51.814957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:51.815924 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:29:51.817904 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:29:51.821090 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:29:51.823019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:29:51.832015 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 19:29:51.832210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:29:51.839560 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:29:51.841618 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:29:51.845050 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:29:51.874916 kernel: loop1: detected capacity change from 0 to 114432 Feb 13 19:29:51.918925 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:29:51.954953 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 19:29:51.959943 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 19:29:51.963935 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:29:51.968203 (sd-merge)[1286]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:29:51.968594 (sd-merge)[1286]: Merged extensions into '/usr'. Feb 13 19:29:51.972873 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:29:51.972911 systemd[1]: Reloading... Feb 13 19:29:52.019927 zram_generator::config[1319]: No configuration found. Feb 13 19:29:52.053405 ldconfig[1270]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:29:52.113977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:29:52.157972 systemd[1]: Reloading finished in 184 ms. Feb 13 19:29:52.174835 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:29:52.176054 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:29:52.199068 systemd[1]: Starting ensure-sysext.service... Feb 13 19:29:52.200943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:29:52.205220 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:29:52.205236 systemd[1]: Reloading... Feb 13 19:29:52.218108 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 19:29:52.218373 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:29:52.219024 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:29:52.219247 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Feb 13 19:29:52.219298 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Feb 13 19:29:52.221836 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:29:52.221850 systemd-tmpfiles[1358]: Skipping /boot Feb 13 19:29:52.229176 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:29:52.229194 systemd-tmpfiles[1358]: Skipping /boot Feb 13 19:29:52.253918 zram_generator::config[1390]: No configuration found. Feb 13 19:29:52.339957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:29:52.384215 systemd[1]: Reloading finished in 178 ms. Feb 13 19:29:52.400758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:29:52.420972 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:29:52.423184 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:29:52.425238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:29:52.428812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:29:52.433457 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:29:52.439269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:52.440431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:52.444662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:52.454157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:52.459097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:52.459874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:52.460098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:52.463177 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:52.463329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:52.467789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:52.473421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:52.478301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:52.479378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:52.480422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:29:52.484408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 19:29:52.484563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:52.486408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:52.486558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:52.488193 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:52.488377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:52.491742 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:29:52.499735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:52.523267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:52.528129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:29:52.529807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:52.533968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:52.534803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:52.535220 augenrules[1471]: No rules Feb 13 19:29:52.537737 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:29:52.542149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:29:52.548565 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:29:52.549962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:52.550120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:52.551334 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:29:52.551474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:29:52.551903 systemd-resolved[1433]: Positive Trust Anchors: Feb 13 19:29:52.551921 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:29:52.551954 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:29:52.552647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:52.552803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:52.554118 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:52.554291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:52.561540 systemd[1]: Finished ensure-sysext.service. Feb 13 19:29:52.564739 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:29:52.566710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:29:52.566810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:29:52.569200 systemd-resolved[1433]: Defaulting to hostname 'linux'. Feb 13 19:29:52.578095 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:29:52.578957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:29:52.579251 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:29:52.580472 systemd[1]: Reached target network.target - Network. Feb 13 19:29:52.581319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:29:52.627435 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:29:52.628674 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:29:52.628736 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-02-13 19:29:52.799644 UTC. Feb 13 19:29:52.628876 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:29:52.629784 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:29:52.630778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:29:52.631740 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:29:52.632724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:29:52.632762 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:29:52.633465 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:29:52.634359 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:29:52.635279 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:29:52.636252 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:29:52.637555 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:29:52.639839 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:29:52.641860 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:29:52.654942 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:29:52.655790 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:29:52.656556 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:29:52.657440 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:29:52.657490 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:29:52.657511 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:29:52.658685 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:29:52.660654 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:29:52.662443 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:29:52.666100 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 19:29:52.666950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:29:52.669077 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:29:52.676694 jq[1503]: false Feb 13 19:29:52.677426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:29:52.682631 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:29:52.685253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:29:52.686814 dbus-daemon[1502]: [system] SELinux support is enabled Feb 13 19:29:52.692127 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:29:52.695427 extend-filesystems[1505]: Found loop3 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found loop4 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found loop5 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda1 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda2 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda3 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found usr Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda4 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda6 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda7 Feb 13 19:29:52.696987 extend-filesystems[1505]: Found vda9 Feb 13 19:29:52.696987 extend-filesystems[1505]: Checking size of /dev/vda9 Feb 13 19:29:52.698838 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:29:52.700965 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:29:52.708411 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:29:52.709771 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:29:52.717951 jq[1527]: true Feb 13 19:29:52.718192 extend-filesystems[1505]: Resized partition /dev/vda9 Feb 13 19:29:52.716286 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:29:52.716496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:29:52.716741 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:29:52.716945 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:29:52.724780 extend-filesystems[1531]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:29:52.725494 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:29:52.725693 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:29:52.730929 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:29:52.739653 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:29:52.747284 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:29:52.747314 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
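
Between the partition scan above and the completion messages a few entries further on, extend-filesystems grows the root filesystem online; the kernel line shows ext4 on /dev/vda9 being resized from 553472 to 1864699 4k blocks. The manual equivalent of that final step would be roughly (hypothetical command, not from the log):

    resize2fs /dev/vda9    # grow the mounted ext4 root online to fill its partition
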
Feb 13 19:29:52.749401 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:29:52.749420 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:29:52.756383 jq[1535]: true Feb 13 19:29:52.762510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1223) Feb 13 19:29:52.762550 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:29:52.779094 update_engine[1525]: I20250213 19:29:52.778771 1525 main.cc:92] Flatcar Update Engine starting Feb 13 19:29:52.783950 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:29:52.783950 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:29:52.783950 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:29:52.788568 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Feb 13 19:29:52.786756 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:29:52.792982 tar[1533]: linux-arm64/helm Feb 13 19:29:52.787008 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:29:52.792110 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:29:52.793003 systemd-logind[1518]: New seat seat0. Feb 13 19:29:52.794099 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:29:52.795233 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:29:52.796799 update_engine[1525]: I20250213 19:29:52.796367 1525 update_check_scheduler.cc:74] Next update check in 7m11s Feb 13 19:29:52.797849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:29:52.805156 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:29:52.841920 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:29:52.843571 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:29:52.848050 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:29:52.867332 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:29:52.979945 containerd[1536]: time="2025-02-13T19:29:52.979839040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:29:53.006024 containerd[1536]: time="2025-02-13T19:29:53.005719435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.007302 containerd[1536]: time="2025-02-13T19:29:53.007264991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007451699Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007478664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007647887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007667456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007727554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:53.007930 containerd[1536]: time="2025-02-13T19:29:53.007740832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008091 containerd[1536]: time="2025-02-13T19:29:53.008000876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008091 containerd[1536]: time="2025-02-13T19:29:53.008018894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008091 containerd[1536]: time="2025-02-13T19:29:53.008034950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008091 containerd[1536]: time="2025-02-13T19:29:53.008045204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008164 containerd[1536]: time="2025-02-13T19:29:53.008128508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008358 containerd[1536]: time="2025-02-13T19:29:53.008320651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008938 containerd[1536]: time="2025-02-13T19:29:53.008454207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:53.008938 containerd[1536]: time="2025-02-13T19:29:53.008474757Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:29:53.008938 containerd[1536]: time="2025-02-13T19:29:53.008548133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:29:53.008938 containerd[1536]: time="2025-02-13T19:29:53.008584535Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:29:53.012194 containerd[1536]: time="2025-02-13T19:29:53.012014501Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:29:53.012194 containerd[1536]: time="2025-02-13T19:29:53.012059524Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:29:53.012194 containerd[1536]: time="2025-02-13T19:29:53.012075580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 19:29:53.012194 containerd[1536]: time="2025-02-13T19:29:53.012106344Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:29:53.012194 containerd[1536]: time="2025-02-13T19:29:53.012124770Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:29:53.012337 containerd[1536]: time="2025-02-13T19:29:53.012257958Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:29:53.013794 containerd[1536]: time="2025-02-13T19:29:53.013403827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:29:53.013794 containerd[1536]: time="2025-02-13T19:29:53.013694267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:29:53.013794 containerd[1536]: time="2025-02-13T19:29:53.013716656Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:29:53.014183 containerd[1536]: time="2025-02-13T19:29:53.013772627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:29:53.014340 containerd[1536]: time="2025-02-13T19:29:53.014316943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014457 containerd[1536]: time="2025-02-13T19:29:53.014440489Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014583 containerd[1536]: time="2025-02-13T19:29:53.014508676Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014875 containerd[1536]: time="2025-02-13T19:29:53.014533925Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014875 containerd[1536]: time="2025-02-13T19:29:53.014750213Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014875 containerd[1536]: time="2025-02-13T19:29:53.014775502Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.014875 containerd[1536]: time="2025-02-13T19:29:53.014793356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.015063 systemd-networkd[1228]: eth0: Gained IPv6LL Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015682408Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015911280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015933383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015946824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015959693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015979018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.015992459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016004471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016028616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016051659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016067388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016080094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016092555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016107835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017348 containerd[1536]: time="2025-02-13T19:29:53.016125648Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:29:53.017333 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016149385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016162336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016173407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016289600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016307413Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016318280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016330128Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016339689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016352844Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:29:53.017735 containerd[1536]: time="2025-02-13T19:29:53.016363425Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:29:53.017990 containerd[1536]: time="2025-02-13T19:29:53.017965934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:29:53.018734 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.018434054Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.018499545Z" level=info msg="Connect containerd service" Feb 13 
19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.018642947Z" level=info msg="using legacy CRI server" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.018651036Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.018755381Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.019368619Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.019934383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.019978915Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020190954Z" level=info msg="Start subscribing containerd event" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020226376Z" level=info msg="Start recovering state" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020287373Z" level=info msg="Start event monitor" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020297137Z" level=info msg="Start snapshots syncer" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020305880Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020318464Z" level=info msg="Start streaming server" Feb 13 19:29:53.021468 containerd[1536]: time="2025-02-13T19:29:53.020486011Z" level=info msg="containerd successfully booted in 0.044096s" Feb 13 19:29:53.023820 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:29:53.027301 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:29:53.031956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:29:53.035488 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:29:53.036950 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:29:53.053172 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:29:53.060364 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:29:53.060602 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:29:53.063978 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:29:53.064745 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:29:53.078310 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:29:53.086272 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:29:53.086490 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:29:53.094193 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:29:53.106883 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:29:53.116202 systemd[1]: Started getty@tty1.service - Getty on tty1. 
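[Editor's note] The CRI plugin error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: no CNI add-on has installed a network config yet. As a minimal sketch of what a conflist in that directory looks like — assuming the standard bridge/host-local/portmap plugins from /opt/cni/bin (the NetworkPluginBinDir shown in the config dump above); file name and subnet are illustrative, a real add-on (Flannel, Calico, ...) writes its own file:

    # Hypothetical example file; a cluster network add-on normally creates this.
    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF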
Feb 13 19:29:53.118262 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:29:53.119332 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:29:53.157970 tar[1533]: linux-arm64/LICENSE Feb 13 19:29:53.158075 tar[1533]: linux-arm64/README.md Feb 13 19:29:53.172330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:29:53.551113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:29:53.552382 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:29:53.553773 systemd[1]: Startup finished in 5.311s (kernel) + 3.130s (userspace) = 8.441s. Feb 13 19:29:53.556384 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:29:54.051846 kubelet[1639]: E0213 19:29:54.051791 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:29:54.054257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:29:54.054448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:29:58.529478 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:29:58.538127 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:51500.service - OpenSSH per-connection server daemon (10.0.0.1:51500). Feb 13 19:29:58.579828 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 51500 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:58.581423 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:58.588473 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:29:58.598101 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:29:58.599971 systemd-logind[1518]: New session 1 of user core. Feb 13 19:29:58.607022 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:29:58.609043 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:29:58.615217 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:29:58.684959 systemd[1659]: Queued start job for default target default.target. Feb 13 19:29:58.685290 systemd[1659]: Created slice app.slice - User Application Slice. Feb 13 19:29:58.685314 systemd[1659]: Reached target paths.target - Paths. Feb 13 19:29:58.685327 systemd[1659]: Reached target timers.target - Timers. Feb 13 19:29:58.697970 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:29:58.703327 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:29:58.703394 systemd[1659]: Reached target sockets.target - Sockets. Feb 13 19:29:58.703406 systemd[1659]: Reached target basic.target - Basic System. Feb 13 19:29:58.703440 systemd[1659]: Reached target default.target - Main User Target. Feb 13 19:29:58.703463 systemd[1659]: Startup finished in 81ms. Feb 13 19:29:58.703652 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:29:58.705593 systemd[1]: Started session-1.scope - Session 1 of User core. 
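[Editor's note] The kubelet exit above is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until kubeadm init/join writes it, so the unit will keep failing and restarting until then. A minimal sketch of that file's format (KubeletConfiguration, kubelet.config.k8s.io/v1beta1), with illustrative values only — kubeadm generates the real file:

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    staticPodPath: /etc/kubernetes/manifests
    EOF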
Feb 13 19:29:58.761269 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:51508.service - OpenSSH per-connection server daemon (10.0.0.1:51508). Feb 13 19:29:58.799296 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 51508 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:58.800516 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:58.804291 systemd-logind[1518]: New session 2 of user core. Feb 13 19:29:58.811178 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:29:58.863945 sshd[1671]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:58.872118 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:51522.service - OpenSSH per-connection server daemon (10.0.0.1:51522). Feb 13 19:29:58.872571 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:51508.service: Deactivated successfully. Feb 13 19:29:58.873972 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:29:58.875113 systemd-logind[1518]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:29:58.876112 systemd-logind[1518]: Removed session 2. Feb 13 19:29:58.901701 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 51522 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:58.903161 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:58.906959 systemd-logind[1518]: New session 3 of user core. Feb 13 19:29:58.916135 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:29:58.963573 sshd[1676]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:58.974119 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:51534.service - OpenSSH per-connection server daemon (10.0.0.1:51534). Feb 13 19:29:58.974584 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:51522.service: Deactivated successfully. Feb 13 19:29:58.975975 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:29:58.976932 systemd-logind[1518]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:29:58.978119 systemd-logind[1518]: Removed session 3. Feb 13 19:29:59.003837 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 51534 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:59.005041 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:59.008714 systemd-logind[1518]: New session 4 of user core. Feb 13 19:29:59.025118 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:29:59.077040 sshd[1684]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:59.092118 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Feb 13 19:29:59.092482 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:51534.service: Deactivated successfully. Feb 13 19:29:59.094133 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:29:59.094677 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:29:59.095957 systemd-logind[1518]: Removed session 4. Feb 13 19:29:59.122166 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:59.123313 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:59.126967 systemd-logind[1518]: New session 5 of user core. Feb 13 19:29:59.139162 systemd[1]: Started session-5.scope - Session 5 of User core. 
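[Editor's note] Each accepted connection above becomes its own per-connection sshd@... service instance plus a session-N.scope under user-500.slice, tracked by systemd-logind. A quick sketch for inspecting that state on the node (standard commands; the session number matches the log):

    loginctl list-sessions            # one row per session-N.scope above
    loginctl session-status 5         # leader PID, scope, and TTY for session 5
    systemctl status user-500.slice   # user core's sessions plus user@500.service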
Feb 13 19:29:59.207627 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:29:59.207928 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.221759 sudo[1699]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.223278 sshd[1692]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:59.233130 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556). Feb 13 19:29:59.233769 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:51544.service: Deactivated successfully. Feb 13 19:29:59.235943 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:29:59.235986 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:29:59.237127 systemd-logind[1518]: Removed session 5. Feb 13 19:29:59.262733 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:59.263886 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:59.267249 systemd-logind[1518]: New session 6 of user core. Feb 13 19:29:59.276119 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:29:59.326726 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:29:59.327015 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.329797 sudo[1709]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.334209 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:29:59.334471 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.350116 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:29:59.351348 auditctl[1712]: No rules Feb 13 19:29:59.352143 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:29:59.352363 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:29:59.353936 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:29:59.376289 augenrules[1731]: No rules Feb 13 19:29:59.377456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:29:59.378727 sudo[1708]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.380508 sshd[1701]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:59.395115 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:51560.service - OpenSSH per-connection server daemon (10.0.0.1:51560). Feb 13 19:29:59.395472 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:51556.service: Deactivated successfully. Feb 13 19:29:59.396833 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:29:59.397837 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:29:59.398851 systemd-logind[1518]: Removed session 6. Feb 13 19:29:59.426473 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 51560 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:29:59.427638 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:59.431444 systemd-logind[1518]: New session 7 of user core. Feb 13 19:29:59.440115 systemd[1]: Started session-7.scope - Session 7 of User core. 
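[Editor's note] The sudo commands above remove the rule files under /etc/audit/rules.d and restart audit-rules.service, after which both auditctl and augenrules report "No rules". A sketch of how rules would normally be laid back down, assuming the augenrules workflow that service uses; the rule file and its contents are illustrative:

    # Hypothetical rule file; augenrules concatenates /etc/audit/rules.d/*.rules
    cat <<'EOF' > /etc/audit/rules.d/99-example.rules
    -w /etc/kubernetes/ -p wa -k k8s-config
    EOF
    augenrules --load     # rebuild and load the combined rule set
    auditctl -l           # list the rules now active in the kernel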
Feb 13 19:29:59.489944 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:29:59.490211 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.788134 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:29:59.788353 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:30:00.043688 dockerd[1763]: time="2025-02-13T19:30:00.043560153Z" level=info msg="Starting up" Feb 13 19:30:00.286032 dockerd[1763]: time="2025-02-13T19:30:00.285987424Z" level=info msg="Loading containers: start." Feb 13 19:30:00.368940 kernel: Initializing XFRM netlink socket Feb 13 19:30:00.428642 systemd-networkd[1228]: docker0: Link UP Feb 13 19:30:00.448190 dockerd[1763]: time="2025-02-13T19:30:00.448142355Z" level=info msg="Loading containers: done." Feb 13 19:30:00.465100 dockerd[1763]: time="2025-02-13T19:30:00.465048531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:30:00.465236 dockerd[1763]: time="2025-02-13T19:30:00.465142958Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:30:00.465264 dockerd[1763]: time="2025-02-13T19:30:00.465244644Z" level=info msg="Daemon has completed initialization" Feb 13 19:30:00.466147 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4160997178-merged.mount: Deactivated successfully. Feb 13 19:30:00.497131 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:30:00.497469 dockerd[1763]: time="2025-02-13T19:30:00.496905159Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:30:01.237846 containerd[1536]: time="2025-02-13T19:30:01.237791292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:30:01.837003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992336817.mount: Deactivated successfully. 
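[Editor's note] dockerd is started here with the unit's DOCKER_OPT_BIP/DOCKER_OPT_MTU-style environment knobs unset; the same settings can live in /etc/docker/daemon.json. A sketch with illustrative values, matching the overlay2 storage driver the daemon reports below:

    cat <<'EOF' > /etc/docker/daemon.json
    {
      "storage-driver": "overlay2",
      "bip": "172.17.0.1/16",
      "mtu": 1500,
      "ip-masq": true
    }
    EOF
    systemctl restart docker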
Feb 13 19:30:03.642634 containerd[1536]: time="2025-02-13T19:30:03.642572548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.643024 containerd[1536]: time="2025-02-13T19:30:03.642936987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:30:03.643930 containerd[1536]: time="2025-02-13T19:30:03.643890963Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.646715 containerd[1536]: time="2025-02-13T19:30:03.646640142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.647826 containerd[1536]: time="2025-02-13T19:30:03.647784704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.409948701s" Feb 13 19:30:03.647864 containerd[1536]: time="2025-02-13T19:30:03.647824969Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:30:03.665718 containerd[1536]: time="2025-02-13T19:30:03.665571278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:30:04.305004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:30:04.314061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:04.400432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:04.404229 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:30:04.442101 kubelet[1989]: E0213 19:30:04.442050 1989 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:30:04.445370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:30:04.445528 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
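[Editor's note] The "Scheduled restart job, restart counter is at 1" line shows the kubelet unit is configured to restart itself after the config-file failure. A sketch for inspecting and, if needed, adjusting that cadence with a drop-in (standard systemd directives; the file name and values are illustrative):

    systemctl cat kubelet.service      # view the unit plus any existing drop-ins
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=on-failure
    RestartSec=10
    EOF
    systemctl daemon-reload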
Feb 13 19:30:05.634856 containerd[1536]: time="2025-02-13T19:30:05.634805981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:05.635823 containerd[1536]: time="2025-02-13T19:30:05.635618947Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:30:05.637065 containerd[1536]: time="2025-02-13T19:30:05.637019834Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:05.639510 containerd[1536]: time="2025-02-13T19:30:05.639463112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:05.641106 containerd[1536]: time="2025-02-13T19:30:05.640969693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.975361863s" Feb 13 19:30:05.641106 containerd[1536]: time="2025-02-13T19:30:05.641008861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:30:05.658976 containerd[1536]: time="2025-02-13T19:30:05.658944495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:30:07.735161 containerd[1536]: time="2025-02-13T19:30:07.735112862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.736154 containerd[1536]: time="2025-02-13T19:30:07.735692125Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:30:07.736680 containerd[1536]: time="2025-02-13T19:30:07.736646703Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.740061 containerd[1536]: time="2025-02-13T19:30:07.740028858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.740734 containerd[1536]: time="2025-02-13T19:30:07.740700907Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 2.081720307s" Feb 13 19:30:07.740783 containerd[1536]: time="2025-02-13T19:30:07.740732973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:30:07.757983 
containerd[1536]: time="2025-02-13T19:30:07.757953609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:30:09.159334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306307019.mount: Deactivated successfully. Feb 13 19:30:09.471687 containerd[1536]: time="2025-02-13T19:30:09.471467229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.472586 containerd[1536]: time="2025-02-13T19:30:09.472381732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:30:09.473271 containerd[1536]: time="2025-02-13T19:30:09.473212664Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.474971 containerd[1536]: time="2025-02-13T19:30:09.474939533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.475775 containerd[1536]: time="2025-02-13T19:30:09.475741432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.71774738s" Feb 13 19:30:09.475833 containerd[1536]: time="2025-02-13T19:30:09.475780250Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:30:09.494763 containerd[1536]: time="2025-02-13T19:30:09.494707393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:30:10.628512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907572894.mount: Deactivated successfully. 
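[Editor's note] The pulls above land in containerd's k8s.io namespace (the namespace the CRI plugin uses), so they are visible with ctr. A sketch:

    ctr --namespace k8s.io images ls                              # list the images pulled above
    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.9  # manual pull into the same namespace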
Feb 13 19:30:11.249759 containerd[1536]: time="2025-02-13T19:30:11.249715077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.250708 containerd[1536]: time="2025-02-13T19:30:11.250485403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:30:11.251501 containerd[1536]: time="2025-02-13T19:30:11.251465254Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.257177 containerd[1536]: time="2025-02-13T19:30:11.257140362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.258645 containerd[1536]: time="2025-02-13T19:30:11.258429809Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.763682722s" Feb 13 19:30:11.258645 containerd[1536]: time="2025-02-13T19:30:11.258462432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:30:11.276982 containerd[1536]: time="2025-02-13T19:30:11.276949016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:30:11.743197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770044992.mount: Deactivated successfully. 
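[Editor's note] The same images can also be inspected through the CRI API the kubelet uses, provided crictl is pointed at the containerd socket the daemon announced earlier (/run/containerd/containerd.sock). A sketch:

    cat <<'EOF' > /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    EOF
    crictl images        # CRI-side view of the pulled images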
Feb 13 19:30:11.746303 containerd[1536]: time="2025-02-13T19:30:11.746261389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.746891 containerd[1536]: time="2025-02-13T19:30:11.746854573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:30:11.747497 containerd[1536]: time="2025-02-13T19:30:11.747463147Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.749521 containerd[1536]: time="2025-02-13T19:30:11.749487332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.750341 containerd[1536]: time="2025-02-13T19:30:11.750310199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 473.32291ms" Feb 13 19:30:11.750378 containerd[1536]: time="2025-02-13T19:30:11.750341620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:30:11.768438 containerd[1536]: time="2025-02-13T19:30:11.768406750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:30:12.334233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495704607.mount: Deactivated successfully. Feb 13 19:30:14.696046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:30:14.704114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:14.790005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:14.793809 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:30:14.830843 kubelet[2155]: E0213 19:30:14.830792 2155 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:30:14.833362 containerd[1536]: time="2025-02-13T19:30:14.833324780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:14.833590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:30:14.833824 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:30:14.834754 containerd[1536]: time="2025-02-13T19:30:14.834142998Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:30:14.835667 containerd[1536]: time="2025-02-13T19:30:14.835627598Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:14.839932 containerd[1536]: time="2025-02-13T19:30:14.839876813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:14.841417 containerd[1536]: time="2025-02-13T19:30:14.841382360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.072943273s" Feb 13 19:30:14.841468 containerd[1536]: time="2025-02-13T19:30:14.841417886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:30:19.762072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:19.774257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:19.794419 systemd[1]: Reloading requested from client PID 2243 ('systemctl') (unit session-7.scope)... Feb 13 19:30:19.794437 systemd[1]: Reloading... Feb 13 19:30:19.852923 zram_generator::config[2282]: No configuration found. Feb 13 19:30:19.950240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:19.998649 systemd[1]: Reloading finished in 203 ms. Feb 13 19:30:20.032077 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:30:20.032167 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:30:20.032465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:20.034127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:20.122012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:20.126456 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:20.165761 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:20.165761 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:20.165761 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
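[Editor's note] The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file, and the earlier "Referenced but unset environment variable" notes refer to env vars the unit expands (KUBELET_EXTRA_ARGS and friends). A sketch of the usual kubeadm-style wiring — file names and values illustrative, not this node's actual layout:

    # Hypothetical drop-in sourcing the optional env file:
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/20-env.conf
    [Service]
    EnvironmentFile=-/etc/default/kubelet     # may define KUBELET_EXTRA_ARGS
    EOF
    # Config-file form of the deprecated flag, appended to
    # /var/lib/kubelet/config.yaml (field available in kubelet >= 1.27):
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock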
Feb 13 19:30:20.166132 kubelet[2340]: I0213 19:30:20.165858 2340 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:20.967916 kubelet[2340]: I0213 19:30:20.967861 2340 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:30:20.967916 kubelet[2340]: I0213 19:30:20.967905 2340 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:20.968119 kubelet[2340]: I0213 19:30:20.968102 2340 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:30:21.025792 kubelet[2340]: E0213 19:30:21.025755 2340 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.025945 kubelet[2340]: I0213 19:30:21.025915 2340 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:21.034325 kubelet[2340]: I0213 19:30:21.034301 2340 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:30:21.034790 kubelet[2340]: I0213 19:30:21.034771 2340 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:21.034971 kubelet[2340]: I0213 19:30:21.034792 2340 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:30:21.035058 kubelet[2340]: I0213 19:30:21.035035 2340 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:21.035058 kubelet[2340]: I0213 19:30:21.035045 2340 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:30:21.035289 kubelet[2340]: I0213 19:30:21.035274 2340 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:30:21.036249 kubelet[2340]: I0213 19:30:21.036233 2340 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:30:21.036299 kubelet[2340]: I0213 19:30:21.036253 2340 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:21.036736 kubelet[2340]: I0213 19:30:21.036379 2340 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:30:21.036736 kubelet[2340]: I0213 19:30:21.036392 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:21.036967 kubelet[2340]: W0213 19:30:21.036924 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.036967 kubelet[2340]: W0213 19:30:21.036939 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.037033 kubelet[2340]: E0213 19:30:21.036977 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.037033 kubelet[2340]: E0213 19:30:21.036985 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.037493 kubelet[2340]: I0213 19:30:21.037474 2340 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:30:21.037939 kubelet[2340]: I0213 19:30:21.037922 2340 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:21.038037 kubelet[2340]: W0213 19:30:21.038023 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
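[Editor's note] Every reflector and the CSR bootstrap above fail with "connect: connection refused" against https://10.0.0.31:6443 because the kubelet itself has not yet started the static kube-apiserver pod; this chicken-and-egg phase is by design and resolves once the sandboxes below come up. A quick sketch for confirming the state from the node:

    curl -ks https://10.0.0.31:6443/healthz ; echo   # refused until the apiserver is up
    ss -tlnp | grep 6443                             # shows the listener once the pod starts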
Feb 13 19:30:21.038755 kubelet[2340]: I0213 19:30:21.038732 2340 server.go:1264] "Started kubelet" Feb 13 19:30:21.039368 kubelet[2340]: I0213 19:30:21.039329 2340 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:21.041170 kubelet[2340]: I0213 19:30:21.040347 2340 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:30:21.041170 kubelet[2340]: E0213 19:30:21.040695 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db50c4ad5fe5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:21.038706661 +0000 UTC m=+0.909272136,LastTimestamp:2025-02-13 19:30:21.038706661 +0000 UTC m=+0.909272136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:21.041911 kubelet[2340]: I0213 19:30:21.041758 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:21.045122 kubelet[2340]: I0213 19:30:21.042129 2340 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:30:21.045122 kubelet[2340]: I0213 19:30:21.042259 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:21.047447 kubelet[2340]: E0213 19:30:21.046221 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:21.047447 kubelet[2340]: I0213 19:30:21.046390 2340 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:30:21.047447 kubelet[2340]: I0213 19:30:21.046456 2340 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:30:21.047447 kubelet[2340]: I0213 19:30:21.046536 2340 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:21.047447 kubelet[2340]: E0213 19:30:21.046697 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Feb 13 19:30:21.047447 kubelet[2340]: W0213 19:30:21.046765 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.047447 kubelet[2340]: E0213 19:30:21.046797 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.047654 kubelet[2340]: I0213 19:30:21.047643 2340 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:21.049447 kubelet[2340]: I0213 19:30:21.047712 2340 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:21.049447 kubelet[2340]: E0213 19:30:21.048804 2340 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:30:21.051144 kubelet[2340]: I0213 19:30:21.051122 2340 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:21.062804 kubelet[2340]: I0213 19:30:21.062756 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:21.063934 kubelet[2340]: I0213 19:30:21.063871 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:30:21.064053 kubelet[2340]: I0213 19:30:21.064040 2340 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:30:21.064089 kubelet[2340]: I0213 19:30:21.064063 2340 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:30:21.064237 kubelet[2340]: E0213 19:30:21.064110 2340 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:21.064587 kubelet[2340]: W0213 19:30:21.064549 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.064642 kubelet[2340]: E0213 19:30:21.064593 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:21.071086 kubelet[2340]: I0213 19:30:21.071064 2340 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:30:21.071086 kubelet[2340]: I0213 19:30:21.071081 2340 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:21.071173 kubelet[2340]: I0213 19:30:21.071098 2340 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:21.148676 kubelet[2340]: I0213 19:30:21.148624 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:30:21.149007 kubelet[2340]: E0213 19:30:21.148978 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 19:30:21.160257 kubelet[2340]: I0213 19:30:21.160228 2340 policy_none.go:49] "None policy: Start" Feb 13 19:30:21.160908 kubelet[2340]: I0213 19:30:21.160880 2340 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:30:21.161154 kubelet[2340]: I0213 19:30:21.161016 2340 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:21.164451 kubelet[2340]: E0213 19:30:21.164424 2340 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:30:21.166191 kubelet[2340]: I0213 19:30:21.165405 2340 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:21.166191 kubelet[2340]: I0213 19:30:21.165569 2340 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:21.166191 kubelet[2340]: I0213 19:30:21.165659 2340 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:21.167221 kubelet[2340]: E0213 19:30:21.167202 2340 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:30:21.248747 kubelet[2340]: E0213 19:30:21.248067 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Feb 13 19:30:21.350417 kubelet[2340]: I0213 19:30:21.350373 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:30:21.350672 kubelet[2340]: E0213 19:30:21.350648 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 19:30:21.364865 kubelet[2340]: I0213 19:30:21.364812 2340 topology_manager.go:215] "Topology Admit Handler" podUID="732c49403160d0cb43a89f0469f4ebcd" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:30:21.365949 kubelet[2340]: I0213 19:30:21.365913 2340 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:30:21.367203 kubelet[2340]: I0213 19:30:21.366668 2340 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:30:21.449230 kubelet[2340]: I0213 19:30:21.449192 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:21.449230 kubelet[2340]: I0213 19:30:21.449232 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:21.449448 kubelet[2340]: I0213 19:30:21.449252 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:21.449448 kubelet[2340]: I0213 19:30:21.449268 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:21.449448 kubelet[2340]: I0213 19:30:21.449291 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:21.449448 kubelet[2340]: I0213 19:30:21.449305 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:21.449448 kubelet[2340]: I0213 19:30:21.449320 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:21.449586 kubelet[2340]: I0213 19:30:21.449336 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:30:21.449586 kubelet[2340]: I0213 19:30:21.449357 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:21.648948 kubelet[2340]: E0213 19:30:21.648808 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Feb 13 19:30:21.673244 kubelet[2340]: E0213 19:30:21.670292 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:21.673244 kubelet[2340]: E0213 19:30:21.671387 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:21.673388 kubelet[2340]: E0213 19:30:21.673360 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:21.674496 containerd[1536]: time="2025-02-13T19:30:21.674455651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:21.674866 containerd[1536]: time="2025-02-13T19:30:21.674484626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:732c49403160d0cb43a89f0469f4ebcd,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:21.675110 containerd[1536]: time="2025-02-13T19:30:21.674922888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:21.752336 kubelet[2340]: I0213 19:30:21.752307 2340 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Feb 13 19:30:21.752800 kubelet[2340]: E0213 19:30:21.752771 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 19:30:22.118600 kubelet[2340]: W0213 19:30:22.118468 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.118750 kubelet[2340]: E0213 19:30:22.118735 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.150834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176659856.mount: Deactivated successfully. Feb 13 19:30:22.155509 containerd[1536]: time="2025-02-13T19:30:22.155318285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:22.155973 containerd[1536]: time="2025-02-13T19:30:22.155943923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:30:22.156581 containerd[1536]: time="2025-02-13T19:30:22.156549392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:22.157982 containerd[1536]: time="2025-02-13T19:30:22.157943252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:22.158401 containerd[1536]: time="2025-02-13T19:30:22.157948775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:22.159558 containerd[1536]: time="2025-02-13T19:30:22.159508148Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:22.159613 containerd[1536]: time="2025-02-13T19:30:22.159581421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:22.163802 containerd[1536]: time="2025-02-13T19:30:22.163758798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:22.164677 containerd[1536]: time="2025-02-13T19:30:22.164637069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.100097ms" Feb 13 19:30:22.167368 containerd[1536]: time="2025-02-13T19:30:22.167339871Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.103384ms" Feb 13 19:30:22.168267 containerd[1536]: time="2025-02-13T19:30:22.168209018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.18796ms" Feb 13 19:30:22.286639 containerd[1536]: time="2025-02-13T19:30:22.286409985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:22.286639 containerd[1536]: time="2025-02-13T19:30:22.286557411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:22.286639 containerd[1536]: time="2025-02-13T19:30:22.286574018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.286939 containerd[1536]: time="2025-02-13T19:30:22.286851542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:22.286939 containerd[1536]: time="2025-02-13T19:30:22.286904485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:22.286939 containerd[1536]: time="2025-02-13T19:30:22.286915930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.287624 containerd[1536]: time="2025-02-13T19:30:22.287556415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.289004 containerd[1536]: time="2025-02-13T19:30:22.288090453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.289116 containerd[1536]: time="2025-02-13T19:30:22.289042996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:22.289116 containerd[1536]: time="2025-02-13T19:30:22.289083535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:22.289116 containerd[1536]: time="2025-02-13T19:30:22.289093979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.289208 containerd[1536]: time="2025-02-13T19:30:22.289162330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.303855 kubelet[2340]: W0213 19:30:22.303796 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.303855 kubelet[2340]: E0213 19:30:22.303855 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.332710 containerd[1536]: time="2025-02-13T19:30:22.332656793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"2773479e710cbc16697bc4574d7364821107ae80828aeb9f3639b9ab69f61a6b\"" Feb 13 19:30:22.333582 kubelet[2340]: E0213 19:30:22.333560 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.338589 containerd[1536]: time="2025-02-13T19:30:22.337683749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5bcede05735885cb67300cfe017fd3942305bae4511c71eb0bda11ecfd1d4d7\"" Feb 13 19:30:22.338818 containerd[1536]: time="2025-02-13T19:30:22.338789400Z" level=info msg="CreateContainer within sandbox \"2773479e710cbc16697bc4574d7364821107ae80828aeb9f3639b9ab69f61a6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:30:22.339823 kubelet[2340]: E0213 19:30:22.339780 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.341127 containerd[1536]: time="2025-02-13T19:30:22.340950561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:732c49403160d0cb43a89f0469f4ebcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"40a084a4a19d47ad01b66aaaaafbe059ddc11ccc45468a87189bbced67a7a887\"" Feb 13 19:30:22.341941 kubelet[2340]: E0213 19:30:22.341923 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.342025 containerd[1536]: time="2025-02-13T19:30:22.341985982Z" level=info msg="CreateContainer within sandbox \"d5bcede05735885cb67300cfe017fd3942305bae4511c71eb0bda11ecfd1d4d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:30:22.345324 containerd[1536]: time="2025-02-13T19:30:22.345295654Z" level=info msg="CreateContainer within sandbox \"40a084a4a19d47ad01b66aaaaafbe059ddc11ccc45468a87189bbced67a7a887\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:30:22.353345 containerd[1536]: time="2025-02-13T19:30:22.353303495Z" level=info msg="CreateContainer within sandbox \"2773479e710cbc16697bc4574d7364821107ae80828aeb9f3639b9ab69f61a6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"18a5546ab4cb749eb7bd9d1211ddd7d7ddc401e15030f8c2cd597a779dff3907\"" Feb 13 
19:30:22.353944 containerd[1536]: time="2025-02-13T19:30:22.353881152Z" level=info msg="StartContainer for \"18a5546ab4cb749eb7bd9d1211ddd7d7ddc401e15030f8c2cd597a779dff3907\"" Feb 13 19:30:22.355915 containerd[1536]: time="2025-02-13T19:30:22.355863434Z" level=info msg="CreateContainer within sandbox \"d5bcede05735885cb67300cfe017fd3942305bae4511c71eb0bda11ecfd1d4d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"46b20a1cc2b5f453ce72fd6df6a5a46eadc81e57561ca0a61e3dffaa81c33339\"" Feb 13 19:30:22.356296 containerd[1536]: time="2025-02-13T19:30:22.356244843Z" level=info msg="StartContainer for \"46b20a1cc2b5f453ce72fd6df6a5a46eadc81e57561ca0a61e3dffaa81c33339\"" Feb 13 19:30:22.361206 containerd[1536]: time="2025-02-13T19:30:22.361177237Z" level=info msg="CreateContainer within sandbox \"40a084a4a19d47ad01b66aaaaafbe059ddc11ccc45468a87189bbced67a7a887\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15cba00b45f0e992e34e80257451674e661bae6b5c7497fb1147cc7db6a47a0a\"" Feb 13 19:30:22.361866 containerd[1536]: time="2025-02-13T19:30:22.361842413Z" level=info msg="StartContainer for \"15cba00b45f0e992e34e80257451674e661bae6b5c7497fb1147cc7db6a47a0a\"" Feb 13 19:30:22.432852 containerd[1536]: time="2025-02-13T19:30:22.432753829Z" level=info msg="StartContainer for \"18a5546ab4cb749eb7bd9d1211ddd7d7ddc401e15030f8c2cd597a779dff3907\" returns successfully" Feb 13 19:30:22.433105 containerd[1536]: time="2025-02-13T19:30:22.433083656Z" level=info msg="StartContainer for \"46b20a1cc2b5f453ce72fd6df6a5a46eadc81e57561ca0a61e3dffaa81c33339\" returns successfully" Feb 13 19:30:22.433190 containerd[1536]: time="2025-02-13T19:30:22.433173936Z" level=info msg="StartContainer for \"15cba00b45f0e992e34e80257451674e661bae6b5c7497fb1147cc7db6a47a0a\" returns successfully" Feb 13 19:30:22.449467 kubelet[2340]: E0213 19:30:22.449424 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Feb 13 19:30:22.473299 kubelet[2340]: W0213 19:30:22.473212 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.473299 kubelet[2340]: E0213 19:30:22.473280 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.555590 kubelet[2340]: I0213 19:30:22.555555 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:30:22.555971 kubelet[2340]: E0213 19:30:22.555880 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 19:30:22.568048 kubelet[2340]: W0213 19:30:22.567996 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:22.568138 kubelet[2340]: E0213 19:30:22.568056 2340 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 19:30:23.073724 kubelet[2340]: E0213 19:30:23.073689 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:23.077453 kubelet[2340]: E0213 19:30:23.077143 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:23.077453 kubelet[2340]: E0213 19:30:23.077374 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:24.081995 kubelet[2340]: E0213 19:30:24.081946 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:24.159052 kubelet[2340]: I0213 19:30:24.158667 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:30:24.360973 kubelet[2340]: E0213 19:30:24.357473 2340 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:30:24.486966 kubelet[2340]: E0213 19:30:24.486839 2340 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db50c4ad5fe5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:21.038706661 +0000 UTC m=+0.909272136,LastTimestamp:2025-02-13 19:30:21.038706661 +0000 UTC m=+0.909272136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:24.539948 kubelet[2340]: I0213 19:30:24.537945 2340 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:30:24.545039 kubelet[2340]: E0213 19:30:24.544779 2340 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db50c5473d9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:21.048790426 +0000 UTC m=+0.919355901,LastTimestamp:2025-02-13 19:30:21.048790426 +0000 UTC m=+0.919355901,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:25.039222 kubelet[2340]: I0213 19:30:25.038970 2340 apiserver.go:52] "Watching apiserver" Feb 13 19:30:25.046808 kubelet[2340]: I0213 19:30:25.046768 2340 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 
19:30:26.576941 systemd[1]: Reloading requested from client PID 2615 ('systemctl') (unit session-7.scope)... Feb 13 19:30:26.576963 systemd[1]: Reloading... Feb 13 19:30:26.634954 zram_generator::config[2657]: No configuration found. Feb 13 19:30:26.806247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:26.861172 systemd[1]: Reloading finished in 283 ms. Feb 13 19:30:26.890178 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:26.902783 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:30:26.903125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:26.910236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:26.992287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:26.997126 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:27.041912 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:27.041912 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:27.041912 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:27.042246 kubelet[2706]: I0213 19:30:27.041968 2706 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:27.045723 kubelet[2706]: I0213 19:30:27.045694 2706 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:30:27.045723 kubelet[2706]: I0213 19:30:27.045719 2706 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:27.045885 kubelet[2706]: I0213 19:30:27.045870 2706 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:30:27.047129 kubelet[2706]: I0213 19:30:27.047103 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:30:27.048237 kubelet[2706]: I0213 19:30:27.048220 2706 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:27.052816 kubelet[2706]: I0213 19:30:27.052799 2706 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:30:27.053202 kubelet[2706]: I0213 19:30:27.053179 2706 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:27.053347 kubelet[2706]: I0213 19:30:27.053204 2706 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:30:27.053437 kubelet[2706]: I0213 19:30:27.053353 2706 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:27.053437 kubelet[2706]: I0213 19:30:27.053362 2706 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:30:27.053437 kubelet[2706]: I0213 19:30:27.053393 2706 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:27.053544 kubelet[2706]: I0213 19:30:27.053480 2706 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:30:27.053544 kubelet[2706]: I0213 19:30:27.053492 2706 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:27.053544 kubelet[2706]: I0213 19:30:27.053516 2706 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:30:27.053544 kubelet[2706]: I0213 19:30:27.053530 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:27.054200 kubelet[2706]: I0213 19:30:27.054157 2706 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:30:27.054340 kubelet[2706]: I0213 19:30:27.054308 2706 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:27.057528 kubelet[2706]: I0213 19:30:27.057507 2706 server.go:1264] "Started kubelet" Feb 13 19:30:27.058254 kubelet[2706]: I0213 19:30:27.058191 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:27.058556 kubelet[2706]: I0213 19:30:27.058413 2706 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
19:30:27.058556 kubelet[2706]: I0213 19:30:27.058492 2706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:27.059090 kubelet[2706]: I0213 19:30:27.059067 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:27.059832 kubelet[2706]: I0213 19:30:27.059641 2706 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:30:27.060444 kubelet[2706]: I0213 19:30:27.060424 2706 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:30:27.060605 kubelet[2706]: I0213 19:30:27.060592 2706 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:30:27.060845 kubelet[2706]: I0213 19:30:27.060831 2706 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:27.079113 kubelet[2706]: I0213 19:30:27.078995 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:27.079780 kubelet[2706]: I0213 19:30:27.079766 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:30:27.079851 kubelet[2706]: I0213 19:30:27.079842 2706 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:30:27.079921 kubelet[2706]: I0213 19:30:27.079910 2706 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:30:27.080011 kubelet[2706]: E0213 19:30:27.079990 2706 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:27.086400 kubelet[2706]: I0213 19:30:27.086381 2706 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:27.086470 kubelet[2706]: I0213 19:30:27.086462 2706 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:27.086855 kubelet[2706]: I0213 19:30:27.086705 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:27.089400 kubelet[2706]: E0213 19:30:27.089297 2706 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:30:27.121580 kubelet[2706]: I0213 19:30:27.121495 2706 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:30:27.121580 kubelet[2706]: I0213 19:30:27.121514 2706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:27.121580 kubelet[2706]: I0213 19:30:27.121534 2706 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:27.121698 kubelet[2706]: I0213 19:30:27.121663 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:30:27.121698 kubelet[2706]: I0213 19:30:27.121672 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:30:27.121698 kubelet[2706]: I0213 19:30:27.121689 2706 policy_none.go:49] "None policy: Start" Feb 13 19:30:27.123184 kubelet[2706]: I0213 19:30:27.123118 2706 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:30:27.123184 kubelet[2706]: I0213 19:30:27.123142 2706 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:27.123310 kubelet[2706]: I0213 19:30:27.123293 2706 state_mem.go:75] "Updated machine memory state" Feb 13 19:30:27.124331 kubelet[2706]: I0213 19:30:27.124314 2706 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:27.125019 kubelet[2706]: I0213 19:30:27.124455 2706 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:27.125019 kubelet[2706]: I0213 19:30:27.124546 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:27.164738 kubelet[2706]: I0213 19:30:27.164715 2706 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:30:27.170821 kubelet[2706]: I0213 19:30:27.170792 2706 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:30:27.170914 kubelet[2706]: I0213 19:30:27.170867 2706 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:30:27.180186 kubelet[2706]: I0213 19:30:27.180135 2706 topology_manager.go:215] "Topology Admit Handler" podUID="732c49403160d0cb43a89f0469f4ebcd" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:30:27.180349 kubelet[2706]: I0213 19:30:27.180226 2706 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:30:27.180349 kubelet[2706]: I0213 19:30:27.180262 2706 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:30:27.362947 kubelet[2706]: I0213 19:30:27.362910 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:27.362947 kubelet[2706]: I0213 19:30:27.362945 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
19:30:27.363100 kubelet[2706]: I0213 19:30:27.362967 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:27.363100 kubelet[2706]: I0213 19:30:27.363015 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:27.363100 kubelet[2706]: I0213 19:30:27.363065 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:27.363100 kubelet[2706]: I0213 19:30:27.363084 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:27.363190 kubelet[2706]: I0213 19:30:27.363103 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:30:27.363190 kubelet[2706]: I0213 19:30:27.363119 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/732c49403160d0cb43a89f0469f4ebcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"732c49403160d0cb43a89f0469f4ebcd\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:27.363190 kubelet[2706]: I0213 19:30:27.363135 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:27.505004 kubelet[2706]: E0213 19:30:27.504876 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:27.505004 kubelet[2706]: E0213 19:30:27.504962 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:27.505906 kubelet[2706]: E0213 19:30:27.505310 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:27.577821 sudo[2739]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:30:27.578111 sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:30:28.009020 sudo[2739]: pam_unix(sudo:session): session closed for user root Feb 13 19:30:28.053935 kubelet[2706]: I0213 19:30:28.053900 2706 apiserver.go:52] "Watching apiserver" Feb 13 19:30:28.061775 kubelet[2706]: I0213 19:30:28.061745 2706 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:30:28.097703 kubelet[2706]: E0213 19:30:28.097657 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.099984 kubelet[2706]: E0213 19:30:28.099109 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.117411 kubelet[2706]: E0213 19:30:28.115466 2706 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:28.117411 kubelet[2706]: E0213 19:30:28.115906 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.138610 kubelet[2706]: I0213 19:30:28.138543 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.131411796 podStartE2EDuration="1.131411796s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:28.131340379 +0000 UTC m=+1.129460960" watchObservedRunningTime="2025-02-13 19:30:28.131411796 +0000 UTC m=+1.129532377" Feb 13 19:30:28.157730 kubelet[2706]: I0213 19:30:28.157625 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.157605242 podStartE2EDuration="1.157605242s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:28.144253738 +0000 UTC m=+1.142374279" watchObservedRunningTime="2025-02-13 19:30:28.157605242 +0000 UTC m=+1.155725823" Feb 13 19:30:29.099323 kubelet[2706]: E0213 19:30:29.099289 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:29.100461 kubelet[2706]: E0213 19:30:29.100105 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:30.001184 sudo[1744]: pam_unix(sudo:session): session closed for user root Feb 13 19:30:30.003376 sshd[1737]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:30.005842 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:51560.service: Deactivated successfully. Feb 13 19:30:30.008247 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:30:30.008549 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. 
Feb 13 19:30:30.009691 systemd-logind[1518]: Removed session 7. Feb 13 19:30:30.189559 kubelet[2706]: E0213 19:30:30.189519 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:33.086589 kubelet[2706]: E0213 19:30:33.086508 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:33.100616 kubelet[2706]: I0213 19:30:33.100563 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.10054944 podStartE2EDuration="6.10054944s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:28.15822823 +0000 UTC m=+1.156348771" watchObservedRunningTime="2025-02-13 19:30:33.10054944 +0000 UTC m=+6.098670021" Feb 13 19:30:33.103356 kubelet[2706]: E0213 19:30:33.103324 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:38.113526 update_engine[1525]: I20250213 19:30:38.112958 1525 update_attempter.cc:509] Updating boot flags... Feb 13 19:30:38.138973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2788) Feb 13 19:30:38.167976 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2787) Feb 13 19:30:38.188922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2787) Feb 13 19:30:38.385768 kubelet[2706]: E0213 19:30:38.385653 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:40.197503 kubelet[2706]: E0213 19:30:40.197463 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:42.081796 kubelet[2706]: I0213 19:30:42.081718 2706 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:30:42.082224 kubelet[2706]: I0213 19:30:42.082205 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:30:42.082254 containerd[1536]: time="2025-02-13T19:30:42.082054887Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:30:42.732805 kubelet[2706]: I0213 19:30:42.732685 2706 topology_manager.go:215] "Topology Admit Handler" podUID="d47c4ce2-c61b-41c7-beb3-2a089a59c1e9" podNamespace="kube-system" podName="kube-proxy-9fr6b" Feb 13 19:30:42.733734 kubelet[2706]: I0213 19:30:42.732824 2706 topology_manager.go:215] "Topology Admit Handler" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" podNamespace="kube-system" podName="cilium-28v8x" Feb 13 19:30:42.765451 kubelet[2706]: I0213 19:30:42.765404 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d47c4ce2-c61b-41c7-beb3-2a089a59c1e9-kube-proxy\") pod \"kube-proxy-9fr6b\" (UID: \"d47c4ce2-c61b-41c7-beb3-2a089a59c1e9\") " pod="kube-system/kube-proxy-9fr6b" Feb 13 19:30:42.765451 kubelet[2706]: I0213 19:30:42.765453 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d47c4ce2-c61b-41c7-beb3-2a089a59c1e9-xtables-lock\") pod \"kube-proxy-9fr6b\" (UID: \"d47c4ce2-c61b-41c7-beb3-2a089a59c1e9\") " pod="kube-system/kube-proxy-9fr6b" Feb 13 19:30:42.765607 kubelet[2706]: I0213 19:30:42.765475 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqrl\" (UniqueName: \"kubernetes.io/projected/d47c4ce2-c61b-41c7-beb3-2a089a59c1e9-kube-api-access-txqrl\") pod \"kube-proxy-9fr6b\" (UID: \"d47c4ce2-c61b-41c7-beb3-2a089a59c1e9\") " pod="kube-system/kube-proxy-9fr6b" Feb 13 19:30:42.765607 kubelet[2706]: I0213 19:30:42.765495 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d47c4ce2-c61b-41c7-beb3-2a089a59c1e9-lib-modules\") pod \"kube-proxy-9fr6b\" (UID: \"d47c4ce2-c61b-41c7-beb3-2a089a59c1e9\") " pod="kube-system/kube-proxy-9fr6b" Feb 13 19:30:42.866785 kubelet[2706]: I0213 19:30:42.866690 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/901727a9-745c-4caa-b25a-e6bcd4f54167-clustermesh-secrets\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869646 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-xtables-lock\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869677 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cni-path\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869705 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-bpf-maps\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869721 2706 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-cgroup\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869737 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-config-path\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.869936 kubelet[2706]: I0213 19:30:42.869755 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-net\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869778 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-kernel\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869799 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-run\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869816 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-etc-cni-netd\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869830 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-lib-modules\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869853 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-hubble-tls\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870109 kubelet[2706]: I0213 19:30:42.869877 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sclj\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-kube-api-access-4sclj\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:42.870368 kubelet[2706]: I0213 19:30:42.870256 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-hostproc\") pod \"cilium-28v8x\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") " pod="kube-system/cilium-28v8x" Feb 13 19:30:43.035938 kubelet[2706]: E0213 19:30:43.035155 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:43.036418 containerd[1536]: time="2025-02-13T19:30:43.036386568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9fr6b,Uid:d47c4ce2-c61b-41c7-beb3-2a089a59c1e9,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:43.038006 kubelet[2706]: E0213 19:30:43.037982 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:43.038351 containerd[1536]: time="2025-02-13T19:30:43.038311100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28v8x,Uid:901727a9-745c-4caa-b25a-e6bcd4f54167,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:43.061395 containerd[1536]: time="2025-02-13T19:30:43.060927029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:43.061395 containerd[1536]: time="2025-02-13T19:30:43.061204739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:43.061395 containerd[1536]: time="2025-02-13T19:30:43.061243823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.061558 containerd[1536]: time="2025-02-13T19:30:43.061520454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.081645 containerd[1536]: time="2025-02-13T19:30:43.081546898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:43.081988 containerd[1536]: time="2025-02-13T19:30:43.081633227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:43.081988 containerd[1536]: time="2025-02-13T19:30:43.081652909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.081988 containerd[1536]: time="2025-02-13T19:30:43.081743839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.090823 containerd[1536]: time="2025-02-13T19:30:43.090730308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28v8x,Uid:901727a9-745c-4caa-b25a-e6bcd4f54167,Namespace:kube-system,Attempt:0,} returns sandbox id \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\"" Feb 13 19:30:43.095628 kubelet[2706]: E0213 19:30:43.095599 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:43.110808 containerd[1536]: time="2025-02-13T19:30:43.110684344Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:30:43.130410 kubelet[2706]: I0213 19:30:43.128628 2706 topology_manager.go:215] "Topology Admit Handler" podUID="fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" podNamespace="kube-system" podName="cilium-operator-599987898-m9gtf" Feb 13 19:30:43.130552 containerd[1536]: time="2025-02-13T19:30:43.129503375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9fr6b,Uid:d47c4ce2-c61b-41c7-beb3-2a089a59c1e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bdfc1e6a625a80dffd8d3cf73bcdc296d355e16286eed930616b31026a5500e\"" Feb 13 19:30:43.131188 kubelet[2706]: E0213 19:30:43.130708 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:43.135986 containerd[1536]: time="2025-02-13T19:30:43.135942004Z" level=info msg="CreateContainer within sandbox \"6bdfc1e6a625a80dffd8d3cf73bcdc296d355e16286eed930616b31026a5500e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:30:43.174359 kubelet[2706]: I0213 19:30:43.174315 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-cilium-config-path\") pod \"cilium-operator-599987898-m9gtf\" (UID: \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\") " pod="kube-system/cilium-operator-599987898-m9gtf" Feb 13 19:30:43.174483 kubelet[2706]: I0213 19:30:43.174363 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lvx\" (UniqueName: \"kubernetes.io/projected/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-kube-api-access-g2lvx\") pod \"cilium-operator-599987898-m9gtf\" (UID: \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\") " pod="kube-system/cilium-operator-599987898-m9gtf" Feb 13 19:30:43.181777 containerd[1536]: time="2025-02-13T19:30:43.181714081Z" level=info msg="CreateContainer within sandbox \"6bdfc1e6a625a80dffd8d3cf73bcdc296d355e16286eed930616b31026a5500e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ca731b199be0e57f0db85ee925fe348e3748ce344ae4e3218cd61b5d12eeb50\"" Feb 13 19:30:43.184631 containerd[1536]: time="2025-02-13T19:30:43.184587797Z" level=info msg="StartContainer for \"8ca731b199be0e57f0db85ee925fe348e3748ce344ae4e3218cd61b5d12eeb50\"" Feb 13 19:30:43.235293 containerd[1536]: time="2025-02-13T19:30:43.235241692Z" level=info msg="StartContainer for \"8ca731b199be0e57f0db85ee925fe348e3748ce344ae4e3218cd61b5d12eeb50\" returns successfully" Feb 13 19:30:43.447146 kubelet[2706]: E0213 19:30:43.447101 2706 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:43.448108 containerd[1536]: time="2025-02-13T19:30:43.447571939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-m9gtf,Uid:fde9aa7b-8a80-4afb-8a52-9eb79ca4771d,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:43.465987 containerd[1536]: time="2025-02-13T19:30:43.465882794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:43.465987 containerd[1536]: time="2025-02-13T19:30:43.465962963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:43.465987 containerd[1536]: time="2025-02-13T19:30:43.465978724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.466139 containerd[1536]: time="2025-02-13T19:30:43.466060213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:43.505359 containerd[1536]: time="2025-02-13T19:30:43.505252326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-m9gtf,Uid:fde9aa7b-8a80-4afb-8a52-9eb79ca4771d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\"" Feb 13 19:30:43.506791 kubelet[2706]: E0213 19:30:43.506760 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:44.125232 kubelet[2706]: E0213 19:30:44.125179 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:44.133862 kubelet[2706]: I0213 19:30:44.133800 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9fr6b" podStartSLOduration=2.133784123 podStartE2EDuration="2.133784123s" podCreationTimestamp="2025-02-13 19:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:44.133255868 +0000 UTC m=+17.131376449" watchObservedRunningTime="2025-02-13 19:30:44.133784123 +0000 UTC m=+17.131904704" Feb 13 19:30:45.949699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201160095.mount: Deactivated successfully. 
Feb 13 19:30:48.254242 containerd[1536]: time="2025-02-13T19:30:48.254187870Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:48.254683 containerd[1536]: time="2025-02-13T19:30:48.254657072Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:30:48.255613 containerd[1536]: time="2025-02-13T19:30:48.255566712Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:48.257349 containerd[1536]: time="2025-02-13T19:30:48.257295385Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.146467345s"
Feb 13 19:30:48.257349 containerd[1536]: time="2025-02-13T19:30:48.257334308Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:30:48.263011 containerd[1536]: time="2025-02-13T19:30:48.262820033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:30:48.264192 containerd[1536]: time="2025-02-13T19:30:48.264146630Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:30:48.278395 containerd[1536]: time="2025-02-13T19:30:48.277509811Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\""
Feb 13 19:30:48.279158 containerd[1536]: time="2025-02-13T19:30:48.278643711Z" level=info msg="StartContainer for \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\""
Feb 13 19:30:48.322278 containerd[1536]: time="2025-02-13T19:30:48.320377519Z" level=info msg="StartContainer for \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\" returns successfully"
Feb 13 19:30:48.529488 containerd[1536]: time="2025-02-13T19:30:48.524495155Z" level=info msg="shim disconnected" id=dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5 namespace=k8s.io
Feb 13 19:30:48.529488 containerd[1536]: time="2025-02-13T19:30:48.529164288Z" level=warning msg="cleaning up after shim disconnected" id=dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5 namespace=k8s.io
Feb 13 19:30:48.529488 containerd[1536]: time="2025-02-13T19:30:48.529176969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:30:49.143153 kubelet[2706]: E0213 19:30:49.142910 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:49.151227 containerd[1536]: time="2025-02-13T19:30:49.151005550Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:30:49.191623 containerd[1536]: time="2025-02-13T19:30:49.191571550Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\""
Feb 13 19:30:49.194378 containerd[1536]: time="2025-02-13T19:30:49.192032909Z" level=info msg="StartContainer for \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\""
Feb 13 19:30:49.230887 containerd[1536]: time="2025-02-13T19:30:49.230849561Z" level=info msg="StartContainer for \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\" returns successfully"
Feb 13 19:30:49.251374 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:30:49.251626 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:30:49.251684 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:30:49.257231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:30:49.273010 systemd[1]: run-containerd-runc-k8s.io-dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5-runc.32U8qU.mount: Deactivated successfully.
Feb 13 19:30:49.273132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5-rootfs.mount: Deactivated successfully.
Feb 13 19:30:49.275118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:30:49.276813 containerd[1536]: time="2025-02-13T19:30:49.276757334Z" level=info msg="shim disconnected" id=ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a namespace=k8s.io
Feb 13 19:30:49.276813 containerd[1536]: time="2025-02-13T19:30:49.276813859Z" level=warning msg="cleaning up after shim disconnected" id=ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a namespace=k8s.io
Feb 13 19:30:49.277296 containerd[1536]: time="2025-02-13T19:30:49.276822780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:30:49.381602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426386738.mount: Deactivated successfully.
Feb 13 19:30:49.858538 containerd[1536]: time="2025-02-13T19:30:49.858492507Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:49.859386 containerd[1536]: time="2025-02-13T19:30:49.859145403Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:30:49.860570 containerd[1536]: time="2025-02-13T19:30:49.860535401Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:49.862258 containerd[1536]: time="2025-02-13T19:30:49.862227904Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.599376868s"
Feb 13 19:30:49.862449 containerd[1536]: time="2025-02-13T19:30:49.862343114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:30:49.864925 containerd[1536]: time="2025-02-13T19:30:49.864878609Z" level=info msg="CreateContainer within sandbox \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:30:49.873627 containerd[1536]: time="2025-02-13T19:30:49.873593508Z" level=info msg="CreateContainer within sandbox \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\""
Feb 13 19:30:49.874071 containerd[1536]: time="2025-02-13T19:30:49.874049547Z" level=info msg="StartContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\""
Feb 13 19:30:49.918117 containerd[1536]: time="2025-02-13T19:30:49.918075840Z" level=info msg="StartContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" returns successfully"
Feb 13 19:30:50.146212 kubelet[2706]: E0213 19:30:50.146106 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:50.150358 kubelet[2706]: E0213 19:30:50.150179 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:50.163076 containerd[1536]: time="2025-02-13T19:30:50.163035876Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:30:50.181778 kubelet[2706]: I0213 19:30:50.180462 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-m9gtf" podStartSLOduration=0.825103675 podStartE2EDuration="7.180443854s" podCreationTimestamp="2025-02-13 19:30:43 +0000 UTC" firstStartedPulling="2025-02-13 19:30:43.507624868 +0000 UTC m=+16.505745449" lastFinishedPulling="2025-02-13 19:30:49.862965047 +0000 UTC m=+22.861085628" observedRunningTime="2025-02-13 19:30:50.179962455 +0000 UTC m=+23.178083036" watchObservedRunningTime="2025-02-13 19:30:50.180443854 +0000 UTC m=+23.178564395"
Feb 13 19:30:50.193556 containerd[1536]: time="2025-02-13T19:30:50.193500798Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\""
Feb 13 19:30:50.196280 containerd[1536]: time="2025-02-13T19:30:50.196238021Z" level=info msg="StartContainer for \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\""
Feb 13 19:30:50.275785 containerd[1536]: time="2025-02-13T19:30:50.275729497Z" level=info msg="StartContainer for \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\" returns successfully"
Feb 13 19:30:50.312987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f-rootfs.mount: Deactivated successfully.
Feb 13 19:30:50.371673 containerd[1536]: time="2025-02-13T19:30:50.371612828Z" level=info msg="shim disconnected" id=5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f namespace=k8s.io
Feb 13 19:30:50.371673 containerd[1536]: time="2025-02-13T19:30:50.371667792Z" level=warning msg="cleaning up after shim disconnected" id=5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f namespace=k8s.io
Feb 13 19:30:50.371673 containerd[1536]: time="2025-02-13T19:30:50.371677833Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:30:51.157153 kubelet[2706]: E0213 19:30:51.156986 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:51.159017 containerd[1536]: time="2025-02-13T19:30:51.158904922Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:30:51.161065 kubelet[2706]: E0213 19:30:51.160990 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:51.179032 containerd[1536]: time="2025-02-13T19:30:51.178645588Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\""
Feb 13 19:30:51.180378 containerd[1536]: time="2025-02-13T19:30:51.180091981Z" level=info msg="StartContainer for \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\""
Feb 13 19:30:51.223685 containerd[1536]: time="2025-02-13T19:30:51.223524424Z" level=info msg="StartContainer for \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\" returns successfully"
Feb 13 19:30:51.242871 containerd[1536]: time="2025-02-13T19:30:51.242818135Z" level=info msg="shim disconnected" id=e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6 namespace=k8s.io
Feb 13 19:30:51.242871 containerd[1536]: time="2025-02-13T19:30:51.242868179Z" level=warning msg="cleaning up after shim disconnected" id=e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6 namespace=k8s.io
Feb 13 19:30:51.242871 containerd[1536]: time="2025-02-13T19:30:51.242876700Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:30:51.272962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6-rootfs.mount: Deactivated successfully.
Feb 13 19:30:52.159957 kubelet[2706]: E0213 19:30:52.159921 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:52.164453 containerd[1536]: time="2025-02-13T19:30:52.164323293Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:30:52.184802 containerd[1536]: time="2025-02-13T19:30:52.184751473Z" level=info msg="CreateContainer within sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\""
Feb 13 19:30:52.185287 containerd[1536]: time="2025-02-13T19:30:52.185237910Z" level=info msg="StartContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\""
Feb 13 19:30:52.230485 containerd[1536]: time="2025-02-13T19:30:52.230447479Z" level=info msg="StartContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" returns successfully"
Feb 13 19:30:52.369320 kubelet[2706]: I0213 19:30:52.369289 2706 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:30:52.391205 kubelet[2706]: I0213 19:30:52.390933 2706 topology_manager.go:215] "Topology Admit Handler" podUID="95827d2d-8ab0-47b4-9d3e-09605adcd13b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sdgww"
Feb 13 19:30:52.391837 kubelet[2706]: I0213 19:30:52.391556 2706 topology_manager.go:215] "Topology Admit Handler" podUID="d03bc3ed-3347-4536-8228-40d79700789a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l88nj"
Feb 13 19:30:52.548013 kubelet[2706]: I0213 19:30:52.547697 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d03bc3ed-3347-4536-8228-40d79700789a-config-volume\") pod \"coredns-7db6d8ff4d-l88nj\" (UID: \"d03bc3ed-3347-4536-8228-40d79700789a\") " pod="kube-system/coredns-7db6d8ff4d-l88nj"
Feb 13 19:30:52.548013 kubelet[2706]: I0213 19:30:52.547750 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhmj\" (UniqueName: \"kubernetes.io/projected/d03bc3ed-3347-4536-8228-40d79700789a-kube-api-access-nnhmj\") pod \"coredns-7db6d8ff4d-l88nj\" (UID: \"d03bc3ed-3347-4536-8228-40d79700789a\") " pod="kube-system/coredns-7db6d8ff4d-l88nj"
Feb 13 19:30:52.548013 kubelet[2706]: I0213 19:30:52.547789 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95827d2d-8ab0-47b4-9d3e-09605adcd13b-config-volume\") pod \"coredns-7db6d8ff4d-sdgww\" (UID: \"95827d2d-8ab0-47b4-9d3e-09605adcd13b\") " pod="kube-system/coredns-7db6d8ff4d-sdgww"
Feb 13 19:30:52.548013 kubelet[2706]: I0213 19:30:52.547809 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5jsr\" (UniqueName: \"kubernetes.io/projected/95827d2d-8ab0-47b4-9d3e-09605adcd13b-kube-api-access-g5jsr\") pod \"coredns-7db6d8ff4d-sdgww\" (UID: \"95827d2d-8ab0-47b4-9d3e-09605adcd13b\") " pod="kube-system/coredns-7db6d8ff4d-sdgww"
Feb 13 19:30:52.697372 kubelet[2706]: E0213 19:30:52.697327 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:52.699501 containerd[1536]: time="2025-02-13T19:30:52.699258430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sdgww,Uid:95827d2d-8ab0-47b4-9d3e-09605adcd13b,Namespace:kube-system,Attempt:0,}"
Feb 13 19:30:52.702088 kubelet[2706]: E0213 19:30:52.701586 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:52.703199 containerd[1536]: time="2025-02-13T19:30:52.703172165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l88nj,Uid:d03bc3ed-3347-4536-8228-40d79700789a,Namespace:kube-system,Attempt:0,}"
Feb 13 19:30:53.165261 kubelet[2706]: E0213 19:30:53.165171 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:53.186030 kubelet[2706]: I0213 19:30:53.185948 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-28v8x" podStartSLOduration=6.03137002 podStartE2EDuration="11.185932579s" podCreationTimestamp="2025-02-13 19:30:42 +0000 UTC" firstStartedPulling="2025-02-13 19:30:43.108111261 +0000 UTC m=+16.106231842" lastFinishedPulling="2025-02-13 19:30:48.26267382 +0000 UTC m=+21.260794401" observedRunningTime="2025-02-13 19:30:53.185075117 +0000 UTC m=+26.183195698" watchObservedRunningTime="2025-02-13 19:30:53.185932579 +0000 UTC m=+26.184053160"
Feb 13 19:30:54.166318 kubelet[2706]: E0213 19:30:54.165942 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:54.220259 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:38894.service - OpenSSH per-connection server daemon (10.0.0.1:38894).
Feb 13 19:30:54.251615 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 38894 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:30:54.252747 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:30:54.256486 systemd-logind[1518]: New session 8 of user core.
Feb 13 19:30:54.266123 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:30:54.393285 systemd-networkd[1228]: cilium_host: Link UP
Feb 13 19:30:54.393404 systemd-networkd[1228]: cilium_net: Link UP
Feb 13 19:30:54.393406 systemd-networkd[1228]: cilium_net: Gained carrier
Feb 13 19:30:54.393526 systemd-networkd[1228]: cilium_host: Gained carrier
Feb 13 19:30:54.393673 systemd-networkd[1228]: cilium_host: Gained IPv6LL
Feb 13 19:30:54.454480 sshd[3554]: pam_unix(sshd:session): session closed for user core
Feb 13 19:30:54.458420 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:38894.service: Deactivated successfully.
Feb 13 19:30:54.461307 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:30:54.462502 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:30:54.463952 systemd-logind[1518]: Removed session 8.
Feb 13 19:30:54.486802 systemd-networkd[1228]: cilium_vxlan: Link UP
Feb 13 19:30:54.486807 systemd-networkd[1228]: cilium_vxlan: Gained carrier
Feb 13 19:30:54.631051 systemd-networkd[1228]: cilium_net: Gained IPv6LL
Feb 13 19:30:54.785927 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:30:55.168662 kubelet[2706]: E0213 19:30:55.168632 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:55.359631 systemd-networkd[1228]: lxc_health: Link UP
Feb 13 19:30:55.369020 systemd-networkd[1228]: lxc_health: Gained carrier
Feb 13 19:30:55.476281 systemd-networkd[1228]: lxcc8a689e67230: Link UP
Feb 13 19:30:55.481929 kernel: eth0: renamed from tmpa4a6b
Feb 13 19:30:55.507096 systemd-networkd[1228]: lxcc8a689e67230: Gained carrier
Feb 13 19:30:55.507966 systemd-networkd[1228]: lxc4d7ffc864439: Link UP
Feb 13 19:30:55.516955 kernel: eth0: renamed from tmp8301f
Feb 13 19:30:55.525403 systemd-networkd[1228]: lxc4d7ffc864439: Gained carrier
Feb 13 19:30:56.174756 kubelet[2706]: E0213 19:30:56.174705 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:56.184329 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL
Feb 13 19:30:56.696081 systemd-networkd[1228]: lxcc8a689e67230: Gained IPv6LL
Feb 13 19:30:57.016096 systemd-networkd[1228]: lxc4d7ffc864439: Gained IPv6LL
Feb 13 19:30:57.174888 kubelet[2706]: E0213 19:30:57.174344 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:57.208086 systemd-networkd[1228]: lxc_health: Gained IPv6LL
Feb 13 19:30:58.176144 kubelet[2706]: E0213 19:30:58.176094 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:58.983180 containerd[1536]: time="2025-02-13T19:30:58.983094560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:30:58.983180 containerd[1536]: time="2025-02-13T19:30:58.983157003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:30:58.983913 containerd[1536]: time="2025-02-13T19:30:58.983646033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:58.984409 containerd[1536]: time="2025-02-13T19:30:58.984313154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:58.984409 containerd[1536]: time="2025-02-13T19:30:58.984265391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:30:58.984409 containerd[1536]: time="2025-02-13T19:30:58.984324355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:30:58.984409 containerd[1536]: time="2025-02-13T19:30:58.984336036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:58.984521 containerd[1536]: time="2025-02-13T19:30:58.984420521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:58.997777 systemd[1]: run-containerd-runc-k8s.io-a4a6bbf7f20d516aa86588939bb044f33cc56055e713cca1a8b243db7d916281-runc.WO6gCX.mount: Deactivated successfully.
Feb 13 19:30:59.007758 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:30:59.011127 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:30:59.028382 containerd[1536]: time="2025-02-13T19:30:59.028088866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sdgww,Uid:95827d2d-8ab0-47b4-9d3e-09605adcd13b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4a6bbf7f20d516aa86588939bb044f33cc56055e713cca1a8b243db7d916281\""
Feb 13 19:30:59.029951 kubelet[2706]: E0213 19:30:59.028647 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:59.031915 containerd[1536]: time="2025-02-13T19:30:59.031830249Z" level=info msg="CreateContainer within sandbox \"a4a6bbf7f20d516aa86588939bb044f33cc56055e713cca1a8b243db7d916281\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:30:59.036044 containerd[1536]: time="2025-02-13T19:30:59.036008897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l88nj,Uid:d03bc3ed-3347-4536-8228-40d79700789a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8301f7a60cac3c9b923552131030d41edf2c0f42b62b51859ac30b9790c1f0b4\""
Feb 13 19:30:59.037277 kubelet[2706]: E0213 19:30:59.037097 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:59.040953 containerd[1536]: time="2025-02-13T19:30:59.040923629Z" level=info msg="CreateContainer within sandbox \"8301f7a60cac3c9b923552131030d41edf2c0f42b62b51859ac30b9790c1f0b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:30:59.046482 containerd[1536]: time="2025-02-13T19:30:59.046445637Z" level=info msg="CreateContainer within sandbox \"a4a6bbf7f20d516aa86588939bb044f33cc56055e713cca1a8b243db7d916281\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b50ff14da48e5cae664f81194ee726d59835639c9c8561f7b9b92570b4a42d08\""
Feb 13 19:30:59.046985 containerd[1536]: time="2025-02-13T19:30:59.046864382Z" level=info msg="StartContainer for \"b50ff14da48e5cae664f81194ee726d59835639c9c8561f7b9b92570b4a42d08\""
Feb 13 19:30:59.052264 containerd[1536]: time="2025-02-13T19:30:59.052228020Z" level=info msg="CreateContainer within sandbox \"8301f7a60cac3c9b923552131030d41edf2c0f42b62b51859ac30b9790c1f0b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74b2a277ddd61a8defa4554587732c22ebcd70073d52af6150d5395164da80c8\""
Feb 13 19:30:59.053691 containerd[1536]: time="2025-02-13T19:30:59.053661946Z" level=info msg="StartContainer for \"74b2a277ddd61a8defa4554587732c22ebcd70073d52af6150d5395164da80c8\""
Feb 13 19:30:59.095827 containerd[1536]: time="2025-02-13T19:30:59.095787448Z" level=info msg="StartContainer for \"b50ff14da48e5cae664f81194ee726d59835639c9c8561f7b9b92570b4a42d08\" returns successfully"
Feb 13 19:30:59.101103 containerd[1536]: time="2025-02-13T19:30:59.100983717Z" level=info msg="StartContainer for \"74b2a277ddd61a8defa4554587732c22ebcd70073d52af6150d5395164da80c8\" returns successfully"
Feb 13 19:30:59.180777 kubelet[2706]: E0213 19:30:59.180468 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:59.189553 kubelet[2706]: E0213 19:30:59.186043 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:59.204959 kubelet[2706]: I0213 19:30:59.204441 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sdgww" podStartSLOduration=16.204423503 podStartE2EDuration="16.204423503s" podCreationTimestamp="2025-02-13 19:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:59.20151885 +0000 UTC m=+32.199639431" watchObservedRunningTime="2025-02-13 19:30:59.204423503 +0000 UTC m=+32.202544124"
Feb 13 19:30:59.217306 kubelet[2706]: I0213 19:30:59.217118 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l88nj" podStartSLOduration=16.217102456 podStartE2EDuration="16.217102456s" podCreationTimestamp="2025-02-13 19:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:59.2168286 +0000 UTC m=+32.214949181" watchObservedRunningTime="2025-02-13 19:30:59.217102456 +0000 UTC m=+32.215223037"
Feb 13 19:30:59.466147 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:38908.service - OpenSSH per-connection server daemon (10.0.0.1:38908).
Feb 13 19:30:59.500494 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 38908 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:30:59.501631 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:30:59.505334 systemd-logind[1518]: New session 9 of user core.
Feb 13 19:30:59.518118 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:30:59.633116 sshd[4119]: pam_unix(sshd:session): session closed for user core
Feb 13 19:30:59.636477 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:38908.service: Deactivated successfully.
Feb 13 19:30:59.638411 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:30:59.638877 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:30:59.640010 systemd-logind[1518]: Removed session 9.
Feb 13 19:30:59.988829 systemd[1]: run-containerd-runc-k8s.io-8301f7a60cac3c9b923552131030d41edf2c0f42b62b51859ac30b9790c1f0b4-runc.csMiBw.mount: Deactivated successfully.
Feb 13 19:31:00.188542 kubelet[2706]: E0213 19:31:00.188187 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:00.189640 kubelet[2706]: E0213 19:31:00.188590 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:01.189443 kubelet[2706]: E0213 19:31:01.189413 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:01.189804 kubelet[2706]: E0213 19:31:01.189461 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:04.647145 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:43266.service - OpenSSH per-connection server daemon (10.0.0.1:43266).
Feb 13 19:31:04.682574 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 43266 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:04.683988 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:04.688905 systemd-logind[1518]: New session 10 of user core.
Feb 13 19:31:04.701166 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:31:04.815591 sshd[4140]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:04.819302 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:43266.service: Deactivated successfully.
Feb 13 19:31:04.821256 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:31:04.821403 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:31:04.822609 systemd-logind[1518]: Removed session 10.
Feb 13 19:31:09.830175 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:43272.service - OpenSSH per-connection server daemon (10.0.0.1:43272).
Feb 13 19:31:09.863681 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 43272 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:09.865298 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:09.870787 systemd-logind[1518]: New session 11 of user core.
Feb 13 19:31:09.879188 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:31:09.997517 sshd[4158]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:10.008147 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:43278.service - OpenSSH per-connection server daemon (10.0.0.1:43278).
Feb 13 19:31:10.008526 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:43272.service: Deactivated successfully.
Feb 13 19:31:10.011343 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:31:10.011477 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:31:10.013252 systemd-logind[1518]: Removed session 11.
Feb 13 19:31:10.039374 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 43278 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:10.040295 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:10.044633 systemd-logind[1518]: New session 12 of user core.
Feb 13 19:31:10.054230 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:31:10.203756 sshd[4171]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:10.214287 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:43292.service - OpenSSH per-connection server daemon (10.0.0.1:43292).
Feb 13 19:31:10.214743 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:43278.service: Deactivated successfully.
Feb 13 19:31:10.218461 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:31:10.220207 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:31:10.224067 systemd-logind[1518]: Removed session 12.
Feb 13 19:31:10.259168 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 43292 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:10.260437 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:10.264616 systemd-logind[1518]: New session 13 of user core.
Feb 13 19:31:10.271240 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:31:10.395084 sshd[4184]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:10.400104 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:43292.service: Deactivated successfully.
Feb 13 19:31:10.405098 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:31:10.405591 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:31:10.407052 systemd-logind[1518]: Removed session 13.
Feb 13 19:31:15.411145 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314).
Feb 13 19:31:15.443522 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:15.444730 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:15.449545 systemd-logind[1518]: New session 14 of user core.
Feb 13 19:31:15.462154 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:31:15.571226 sshd[4205]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:15.573652 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:50314.service: Deactivated successfully.
Feb 13 19:31:15.576778 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:31:15.577865 systemd-logind[1518]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:31:15.579563 systemd-logind[1518]: Removed session 14.
Feb 13 19:31:20.587194 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:50330.service - OpenSSH per-connection server daemon (10.0.0.1:50330).
Feb 13 19:31:20.616248 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 50330 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:20.617392 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:20.621098 systemd-logind[1518]: New session 15 of user core.
Feb 13 19:31:20.630213 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:31:20.736009 sshd[4221]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:20.743278 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340).
Feb 13 19:31:20.743699 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:50330.service: Deactivated successfully.
Feb 13 19:31:20.746875 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:31:20.747646 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:31:20.748742 systemd-logind[1518]: Removed session 15.
Feb 13 19:31:20.772463 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:20.773679 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:20.777550 systemd-logind[1518]: New session 16 of user core.
Feb 13 19:31:20.789149 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:31:20.987618 sshd[4234]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:20.999195 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:50348.service - OpenSSH per-connection server daemon (10.0.0.1:50348).
Feb 13 19:31:20.999551 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:50340.service: Deactivated successfully.
Feb 13 19:31:21.002090 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:31:21.003040 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:31:21.007174 systemd-logind[1518]: Removed session 16.
Feb 13 19:31:21.034629 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 50348 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:21.035984 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:21.039988 systemd-logind[1518]: New session 17 of user core.
Feb 13 19:31:21.048259 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:31:22.334050 sshd[4247]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:22.346225 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:50356.service - OpenSSH per-connection server daemon (10.0.0.1:50356).
Feb 13 19:31:22.347712 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:50348.service: Deactivated successfully.
Feb 13 19:31:22.361778 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:31:22.363285 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:31:22.364909 systemd-logind[1518]: Removed session 17.
Feb 13 19:31:22.392532 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 50356 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:22.393810 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:22.400232 systemd-logind[1518]: New session 18 of user core.
Feb 13 19:31:22.416226 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:31:22.659219 sshd[4268]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:22.668214 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:44228.service - OpenSSH per-connection server daemon (10.0.0.1:44228).
Feb 13 19:31:22.668968 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:50356.service: Deactivated successfully.
Feb 13 19:31:22.673226 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:31:22.673309 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:31:22.674583 systemd-logind[1518]: Removed session 18.
Feb 13 19:31:22.708540 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 44228 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:22.709869 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:22.715804 systemd-logind[1518]: New session 19 of user core.
Feb 13 19:31:22.724171 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:31:22.843585 sshd[4281]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:22.846801 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:44228.service: Deactivated successfully.
Feb 13 19:31:22.849133 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:31:22.849254 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:31:22.850795 systemd-logind[1518]: Removed session 19.
Feb 13 19:31:27.853206 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:44232.service - OpenSSH per-connection server daemon (10.0.0.1:44232).
Feb 13 19:31:27.882635 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 44232 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:27.883762 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:27.887523 systemd-logind[1518]: New session 20 of user core.
Feb 13 19:31:27.896106 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:31:28.002004 sshd[4304]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:28.005379 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:44232.service: Deactivated successfully.
Feb 13 19:31:28.007946 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:31:28.008921 systemd-logind[1518]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:31:28.009847 systemd-logind[1518]: Removed session 20.
Feb 13 19:31:33.013120 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:49908.service - OpenSSH per-connection server daemon (10.0.0.1:49908).
Feb 13 19:31:33.042780 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 49908 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:33.043974 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:33.047828 systemd-logind[1518]: New session 21 of user core.
Feb 13 19:31:33.063164 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:31:33.170509 sshd[4319]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:33.173729 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:49908.service: Deactivated successfully.
Feb 13 19:31:33.176638 systemd-logind[1518]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:31:33.176756 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:31:33.178040 systemd-logind[1518]: Removed session 21.
Feb 13 19:31:37.081025 kubelet[2706]: E0213 19:31:37.080889 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:38.189164 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918).
Feb 13 19:31:38.218621 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:38.219855 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:38.223344 systemd-logind[1518]: New session 22 of user core.
Feb 13 19:31:38.233136 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:31:38.341282 sshd[4334]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:38.356212 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:49920.service - OpenSSH per-connection server daemon (10.0.0.1:49920).
Feb 13 19:31:38.356629 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:49918.service: Deactivated successfully.
Feb 13 19:31:38.358375 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:31:38.360139 systemd-logind[1518]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:31:38.361112 systemd-logind[1518]: Removed session 22.
Feb 13 19:31:38.386790 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 49920 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:38.388043 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:38.391866 systemd-logind[1518]: New session 23 of user core.
Feb 13 19:31:38.400221 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:31:40.298901 containerd[1536]: time="2025-02-13T19:31:40.298835827Z" level=info msg="StopContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" with timeout 30 (s)"
Feb 13 19:31:40.299769 containerd[1536]: time="2025-02-13T19:31:40.299725973Z" level=info msg="Stop container \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" with signal terminated"
Feb 13 19:31:40.328279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a-rootfs.mount: Deactivated successfully.
Feb 13 19:31:40.335556 containerd[1536]: time="2025-02-13T19:31:40.335425075Z" level=info msg="shim disconnected" id=3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a namespace=k8s.io
Feb 13 19:31:40.335556 containerd[1536]: time="2025-02-13T19:31:40.335473154Z" level=warning msg="cleaning up after shim disconnected" id=3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a namespace=k8s.io
Feb 13 19:31:40.335556 containerd[1536]: time="2025-02-13T19:31:40.335484434Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:40.336858 containerd[1536]: time="2025-02-13T19:31:40.336744855Z" level=info msg="StopContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" with timeout 2 (s)"
Feb 13 19:31:40.337146 containerd[1536]: time="2025-02-13T19:31:40.337103249Z" level=info msg="Stop container \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" with signal terminated"
Feb 13 19:31:40.342488 systemd-networkd[1228]: lxc_health: Link DOWN
Feb 13 19:31:40.342929 systemd-networkd[1228]: lxc_health: Lost carrier
Feb 13 19:31:40.358495 containerd[1536]: time="2025-02-13T19:31:40.358444488Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:31:40.386039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838-rootfs.mount: Deactivated successfully.
Feb 13 19:31:40.390803 containerd[1536]: time="2025-02-13T19:31:40.390671722Z" level=info msg="StopContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" returns successfully"
Feb 13 19:31:40.391201 containerd[1536]: time="2025-02-13T19:31:40.391156274Z" level=info msg="shim disconnected" id=14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838 namespace=k8s.io
Feb 13 19:31:40.391201 containerd[1536]: time="2025-02-13T19:31:40.391201074Z" level=warning msg="cleaning up after shim disconnected" id=14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838 namespace=k8s.io
Feb 13 19:31:40.391274 containerd[1536]: time="2025-02-13T19:31:40.391209154Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:40.392207 containerd[1536]: time="2025-02-13T19:31:40.392178899Z" level=info msg="StopPodSandbox for \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\""
Feb 13 19:31:40.392261 containerd[1536]: time="2025-02-13T19:31:40.392220538Z" level=info msg="Container to stop \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.394706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f-shm.mount: Deactivated successfully.
Feb 13 19:31:40.409254 containerd[1536]: time="2025-02-13T19:31:40.408993085Z" level=info msg="StopContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" returns successfully"
Feb 13 19:31:40.409785 containerd[1536]: time="2025-02-13T19:31:40.409759474Z" level=info msg="StopPodSandbox for \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\""
Feb 13 19:31:40.409844 containerd[1536]: time="2025-02-13T19:31:40.409802393Z" level=info msg="Container to stop \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.409844 containerd[1536]: time="2025-02-13T19:31:40.409823273Z" level=info msg="Container to stop \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.409844 containerd[1536]: time="2025-02-13T19:31:40.409834033Z" level=info msg="Container to stop \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.409982 containerd[1536]: time="2025-02-13T19:31:40.409843552Z" level=info msg="Container to stop \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.409982 containerd[1536]: time="2025-02-13T19:31:40.409853992Z" level=info msg="Container to stop \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:31:40.411574 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a-shm.mount: Deactivated successfully.
Feb 13 19:31:40.428167 containerd[1536]: time="2025-02-13T19:31:40.427349128Z" level=info msg="shim disconnected" id=8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f namespace=k8s.io
Feb 13 19:31:40.428167 containerd[1536]: time="2025-02-13T19:31:40.427402448Z" level=warning msg="cleaning up after shim disconnected" id=8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f namespace=k8s.io
Feb 13 19:31:40.428167 containerd[1536]: time="2025-02-13T19:31:40.427412408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:40.434265 containerd[1536]: time="2025-02-13T19:31:40.433862710Z" level=info msg="shim disconnected" id=172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a namespace=k8s.io
Feb 13 19:31:40.434454 containerd[1536]: time="2025-02-13T19:31:40.434432222Z" level=warning msg="cleaning up after shim disconnected" id=172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a namespace=k8s.io
Feb 13 19:31:40.434454 containerd[1536]: time="2025-02-13T19:31:40.434450661Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:40.446087 containerd[1536]: time="2025-02-13T19:31:40.446043687Z" level=info msg="TearDown network for sandbox \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\" successfully"
Feb 13 19:31:40.446087 containerd[1536]: time="2025-02-13T19:31:40.446080566Z" level=info msg="StopPodSandbox for \"8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f\" returns successfully"
Feb 13 19:31:40.457565 containerd[1536]: time="2025-02-13T19:31:40.457469834Z" level=info msg="TearDown network for sandbox \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" successfully"
Feb 13 19:31:40.457565 containerd[1536]: time="2025-02-13T19:31:40.457505874Z" level=info msg="StopPodSandbox for \"172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a\" returns successfully"
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610306 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-xtables-lock\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610351 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/901727a9-745c-4caa-b25a-e6bcd4f54167-clustermesh-secrets\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610371 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-cgroup\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610391 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-run\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610406 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-lib-modules\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.610430 kubelet[2706]: I0213 19:31:40.610420 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-hostproc\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610435 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-bpf-maps\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610448 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cni-path\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610462 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-net\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610480 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-config-path\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610501 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sclj\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-kube-api-access-4sclj\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611015 kubelet[2706]: I0213 19:31:40.610519 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-etc-cni-netd\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611151 kubelet[2706]: I0213 19:31:40.610535 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-kernel\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611151 kubelet[2706]: I0213 19:31:40.610550 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-hubble-tls\") pod \"901727a9-745c-4caa-b25a-e6bcd4f54167\" (UID: \"901727a9-745c-4caa-b25a-e6bcd4f54167\") "
Feb 13 19:31:40.611151 kubelet[2706]: I0213 19:31:40.610568 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-cilium-config-path\") pod \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\" (UID: \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\") "
Feb 13 19:31:40.611151 kubelet[2706]: I0213 19:31:40.610584 2706 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2lvx\" (UniqueName: \"kubernetes.io/projected/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-kube-api-access-g2lvx\") pod \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\" (UID: \"fde9aa7b-8a80-4afb-8a52-9eb79ca4771d\") "
Feb 13 19:31:40.616400 kubelet[2706]: I0213 19:31:40.616357 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.616455 kubelet[2706]: I0213 19:31:40.616436 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.623237 kubelet[2706]: I0213 19:31:40.623200 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624649 kubelet[2706]: I0213 19:31:40.623744 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:31:40.624649 kubelet[2706]: I0213 19:31:40.623794 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-hostproc" (OuterVolumeSpecName: "hostproc") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624649 kubelet[2706]: I0213 19:31:40.623806 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624649 kubelet[2706]: I0213 19:31:40.623819 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cni-path" (OuterVolumeSpecName: "cni-path") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624649 kubelet[2706]: I0213 19:31:40.623829 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624825 kubelet[2706]: I0213 19:31:40.623844 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624825 kubelet[2706]: I0213 19:31:40.624403 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.624825 kubelet[2706]: I0213 19:31:40.624426 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:31:40.625543 kubelet[2706]: I0213 19:31:40.625514 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/901727a9-745c-4caa-b25a-e6bcd4f54167-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:31:40.627311 kubelet[2706]: I0213 19:31:40.627279 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" (UID: "fde9aa7b-8a80-4afb-8a52-9eb79ca4771d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:31:40.627477 kubelet[2706]: I0213 19:31:40.627454 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:31:40.627561 kubelet[2706]: I0213 19:31:40.627486 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-kube-api-access-4sclj" (OuterVolumeSpecName: "kube-api-access-4sclj") pod "901727a9-745c-4caa-b25a-e6bcd4f54167" (UID: "901727a9-745c-4caa-b25a-e6bcd4f54167"). InnerVolumeSpecName "kube-api-access-4sclj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:31:40.630589 kubelet[2706]: I0213 19:31:40.630562 2706 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-kube-api-access-g2lvx" (OuterVolumeSpecName: "kube-api-access-g2lvx") pod "fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" (UID: "fde9aa7b-8a80-4afb-8a52-9eb79ca4771d"). InnerVolumeSpecName "kube-api-access-g2lvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:31:40.710779 kubelet[2706]: I0213 19:31:40.710725 2706 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710779 kubelet[2706]: I0213 19:31:40.710770 2706 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710779 kubelet[2706]: I0213 19:31:40.710789 2706 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710804 2706 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710819 2706 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g2lvx\" (UniqueName: \"kubernetes.io/projected/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d-kube-api-access-g2lvx\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710834 2706 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710844 2706 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/901727a9-745c-4caa-b25a-e6bcd4f54167-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710851 2706 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710858 2706 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710865 2706 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.710970 kubelet[2706]: I0213 19:31:40.710873 2706 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 19:31:40.711138 kubelet[2706]: I0213 19:31:40.710880
2706 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:31:40.711138 kubelet[2706]: I0213 19:31:40.710887 2706 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:31:40.711138 kubelet[2706]: I0213 19:31:40.710912 2706 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/901727a9-745c-4caa-b25a-e6bcd4f54167-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:31:40.711138 kubelet[2706]: I0213 19:31:40.710921 2706 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/901727a9-745c-4caa-b25a-e6bcd4f54167-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:31:40.711138 kubelet[2706]: I0213 19:31:40.710928 2706 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4sclj\" (UniqueName: \"kubernetes.io/projected/901727a9-745c-4caa-b25a-e6bcd4f54167-kube-api-access-4sclj\") on node \"localhost\" DevicePath \"\"" Feb 13 19:31:41.315578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c9e9edabfcc0c936604a7d53265313185e757a0b56fb2d8803551c95b89733f-rootfs.mount: Deactivated successfully. Feb 13 19:31:41.315733 systemd[1]: var-lib-kubelet-pods-fde9aa7b\x2d8a80\x2d4afb\x2d8a52\x2d9eb79ca4771d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg2lvx.mount: Deactivated successfully. Feb 13 19:31:41.315832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-172e373f5b750d1158d87d6ce9b0016880851a1b0ba0cf266a03adc4e4cc4a6a-rootfs.mount: Deactivated successfully. Feb 13 19:31:41.315930 systemd[1]: var-lib-kubelet-pods-901727a9\x2d745c\x2d4caa\x2db25a\x2de6bcd4f54167-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4sclj.mount: Deactivated successfully. Feb 13 19:31:41.316032 systemd[1]: var-lib-kubelet-pods-901727a9\x2d745c\x2d4caa\x2db25a\x2de6bcd4f54167-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:31:41.316112 systemd[1]: var-lib-kubelet-pods-901727a9\x2d745c\x2d4caa\x2db25a\x2de6bcd4f54167-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 19:31:41.321534 kubelet[2706]: I0213 19:31:41.321506 2706 scope.go:117] "RemoveContainer" containerID="3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a"
Feb 13 19:31:41.323500 containerd[1536]: time="2025-02-13T19:31:41.322947457Z" level=info msg="RemoveContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\""
Feb 13 19:31:41.407850 containerd[1536]: time="2025-02-13T19:31:41.407635005Z" level=info msg="RemoveContainer for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" returns successfully"
Feb 13 19:31:41.408567 kubelet[2706]: I0213 19:31:41.408530 2706 scope.go:117] "RemoveContainer" containerID="3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a"
Feb 13 19:31:41.409074 containerd[1536]: time="2025-02-13T19:31:41.408866028Z" level=error msg="ContainerStatus for \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\": not found"
Feb 13 19:31:41.416484 kubelet[2706]: E0213 19:31:41.416437 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\": not found" containerID="3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a"
Feb 13 19:31:41.416568 kubelet[2706]: I0213 19:31:41.416478 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a"} err="failed to get container status \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3061edefad7db3f67f2fcbe9203720a2acddd0a1d63d52535cfd5c00af9c623a\": not found"
Feb 13 19:31:41.416595 kubelet[2706]: I0213 19:31:41.416570 2706 scope.go:117] "RemoveContainer" containerID="14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838"
Feb 13 19:31:41.417823 containerd[1536]: time="2025-02-13T19:31:41.417568427Z" level=info msg="RemoveContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\""
Feb 13 19:31:41.443341 containerd[1536]: time="2025-02-13T19:31:41.443306711Z" level=info msg="RemoveContainer for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" returns successfully"
Feb 13 19:31:41.443629 kubelet[2706]: I0213 19:31:41.443582 2706 scope.go:117] "RemoveContainer" containerID="e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6"
Feb 13 19:31:41.444545 containerd[1536]: time="2025-02-13T19:31:41.444520454Z" level=info msg="RemoveContainer for \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\""
Feb 13 19:31:41.471339 containerd[1536]: time="2025-02-13T19:31:41.471292604Z" level=info msg="RemoveContainer for \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\" returns successfully"
Feb 13 19:31:41.471537 kubelet[2706]: I0213 19:31:41.471502 2706 scope.go:117] "RemoveContainer" containerID="5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f"
Feb 13 19:31:41.472651 containerd[1536]: time="2025-02-13T19:31:41.472618745Z" level=info msg="RemoveContainer for \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\""
Feb 13 19:31:41.474944 containerd[1536]: time="2025-02-13T19:31:41.474885474Z" level=info msg="RemoveContainer for \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\" returns successfully"
Feb 13 19:31:41.475136 kubelet[2706]: I0213 19:31:41.475108 2706 scope.go:117] "RemoveContainer" containerID="ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a"
Feb 13 19:31:41.476258 containerd[1536]: time="2025-02-13T19:31:41.476020178Z" level=info msg="RemoveContainer for \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\""
Feb 13 19:31:41.478261 containerd[1536]: time="2025-02-13T19:31:41.478174068Z" level=info msg="RemoveContainer for \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\" returns successfully"
Feb 13 19:31:41.478440 kubelet[2706]: I0213 19:31:41.478420 2706 scope.go:117] "RemoveContainer" containerID="dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5"
Feb 13 19:31:41.479707 containerd[1536]: time="2025-02-13T19:31:41.479476690Z" level=info msg="RemoveContainer for \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\""
Feb 13 19:31:41.481488 containerd[1536]: time="2025-02-13T19:31:41.481402064Z" level=info msg="RemoveContainer for \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\" returns successfully"
Feb 13 19:31:41.481592 kubelet[2706]: I0213 19:31:41.481568 2706 scope.go:117] "RemoveContainer" containerID="14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838"
Feb 13 19:31:41.481797 containerd[1536]: time="2025-02-13T19:31:41.481753219Z" level=error msg="ContainerStatus for \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\": not found"
Feb 13 19:31:41.481921 kubelet[2706]: E0213 19:31:41.481882 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\": not found" containerID="14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838"
Feb 13 19:31:41.481970 kubelet[2706]: I0213 19:31:41.481930 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838"} err="failed to get container status \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\": rpc error: code = NotFound desc = an error occurred when try to find container \"14b80ae0541d5f5e88a046c2e3b60a3eb3125edcad655967170e27d758677838\": not found"
Feb 13 19:31:41.481970 kubelet[2706]: I0213 19:31:41.481959 2706 scope.go:117] "RemoveContainer" containerID="e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6"
Feb 13 19:31:41.482166 containerd[1536]: time="2025-02-13T19:31:41.482133974Z" level=error msg="ContainerStatus for \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\": not found"
Feb 13 19:31:41.482288 kubelet[2706]: E0213 19:31:41.482268 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\": not found" containerID="e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6"
Feb 13 19:31:41.482321 kubelet[2706]: I0213 19:31:41.482295 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6"} err="failed to get container status \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e63605b0bcdddd5820ef0fd74c00f196eadc17e6f94a329719cb2d14235493f6\": not found"
Feb 13 19:31:41.482321 kubelet[2706]: I0213 19:31:41.482311 2706 scope.go:117] "RemoveContainer" containerID="5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f"
Feb 13 19:31:41.482566 containerd[1536]: time="2025-02-13T19:31:41.482496969Z" level=error msg="ContainerStatus for \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\": not found"
Feb 13 19:31:41.482641 kubelet[2706]: E0213 19:31:41.482617 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\": not found" containerID="5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f"
Feb 13 19:31:41.482669 kubelet[2706]: I0213 19:31:41.482642 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f"} err="failed to get container status \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5733087a5183ec1cb89a0511620aea810bc88511d4726b3c00ff902dd922276f\": not found"
Feb 13 19:31:41.482669 kubelet[2706]: I0213 19:31:41.482658 2706 scope.go:117] "RemoveContainer" containerID="ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a"
Feb 13 19:31:41.482875 containerd[1536]: time="2025-02-13T19:31:41.482838044Z" level=error msg="ContainerStatus for \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\": not found"
Feb 13 19:31:41.483020 kubelet[2706]: E0213 19:31:41.482997 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\": not found" containerID="ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a"
Feb 13 19:31:41.483062 kubelet[2706]: I0213 19:31:41.483024 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a"} err="failed to get container status \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab3024641771f4b949a2ae172a2edd5221961b904557208aa685410b2de6089a\": not found"
Feb 13 19:31:41.483062 kubelet[2706]: I0213 19:31:41.483040 2706 scope.go:117] "RemoveContainer" containerID="dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5"
Feb 13 19:31:41.483280 containerd[1536]: time="2025-02-13T19:31:41.483214839Z" level=error msg="ContainerStatus for \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\": not found"
Feb 13 19:31:41.483403 kubelet[2706]: E0213 19:31:41.483383 2706 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\": not found" containerID="dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5"
Feb 13 19:31:41.483440 kubelet[2706]: I0213 19:31:41.483405 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5"} err="failed to get container status \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd3664e2e0ba4024ba79128dc3ecb10e6fbb0fec3ce2c3a8f673cca5cbd927e5\": not found"
Feb 13 19:31:42.142239 kubelet[2706]: E0213 19:31:42.142194 2706 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:31:42.263732 sshd[4347]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:42.279156 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:49926.service - OpenSSH per-connection server daemon (10.0.0.1:49926).
Feb 13 19:31:42.279535 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:49920.service: Deactivated successfully.
Feb 13 19:31:42.282462 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:31:42.282951 systemd-logind[1518]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:31:42.284626 systemd-logind[1518]: Removed session 23.
Feb 13 19:31:42.309104 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:42.310285 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:42.313927 systemd-logind[1518]: New session 24 of user core.
Feb 13 19:31:42.321115 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:31:43.082728 kubelet[2706]: I0213 19:31:43.082681 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" path="/var/lib/kubelet/pods/901727a9-745c-4caa-b25a-e6bcd4f54167/volumes"
Feb 13 19:31:43.083279 kubelet[2706]: I0213 19:31:43.083251 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" path="/var/lib/kubelet/pods/fde9aa7b-8a80-4afb-8a52-9eb79ca4771d/volumes"
Feb 13 19:31:43.438220 sshd[4514]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:43.446431 systemd[1]: Started sshd@24-10.0.0.31:22-10.0.0.1:47820.service - OpenSSH per-connection server daemon (10.0.0.1:47820).
Feb 13 19:31:43.446817 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:49926.service: Deactivated successfully.
Feb 13 19:31:43.452884 systemd-logind[1518]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:31:43.455815 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:31:43.455980 kubelet[2706]: I0213 19:31:43.455851 2706 topology_manager.go:215] "Topology Admit Handler" podUID="e6adf2f9-fd62-4906-b596-ab0bc984e976" podNamespace="kube-system" podName="cilium-4ffqz"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455925 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="mount-cgroup"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455935 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" containerName="cilium-operator"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455941 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="mount-bpf-fs"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455946 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="cilium-agent"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455953 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="clean-cilium-state"
Feb 13 19:31:43.455980 kubelet[2706]: E0213 19:31:43.455959 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="apply-sysctl-overwrites"
Feb 13 19:31:43.455980 kubelet[2706]: I0213 19:31:43.455980 2706 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde9aa7b-8a80-4afb-8a52-9eb79ca4771d" containerName="cilium-operator"
Feb 13 19:31:43.455980 kubelet[2706]: I0213 19:31:43.455986 2706 memory_manager.go:354] "RemoveStaleState removing state" podUID="901727a9-745c-4caa-b25a-e6bcd4f54167" containerName="cilium-agent"
Feb 13 19:31:43.462179 systemd-logind[1518]: Removed session 24.
Feb 13 19:31:43.506661 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 47820 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:43.509433 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:43.513867 systemd-logind[1518]: New session 25 of user core.
Feb 13 19:31:43.522201 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:31:43.572507 sshd[4530]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:43.585357 systemd[1]: Started sshd@25-10.0.0.31:22-10.0.0.1:47830.service - OpenSSH per-connection server daemon (10.0.0.1:47830).
Feb 13 19:31:43.585741 systemd[1]: sshd@24-10.0.0.31:22-10.0.0.1:47820.service: Deactivated successfully.
Feb 13 19:31:43.587962 systemd-logind[1518]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:31:43.588602 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:31:43.590051 systemd-logind[1518]: Removed session 25.
Feb 13 19:31:43.615920 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 47830 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:31:43.616535 sshd[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:43.620275 systemd-logind[1518]: New session 26 of user core.
Feb 13 19:31:43.627463 kubelet[2706]: I0213 19:31:43.627426 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-cilium-run\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.627604 kubelet[2706]: I0213 19:31:43.627587 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-cilium-cgroup\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.627713 kubelet[2706]: I0213 19:31:43.627699 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6adf2f9-fd62-4906-b596-ab0bc984e976-cilium-config-path\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627783 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-lib-modules\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627809 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6adf2f9-fd62-4906-b596-ab0bc984e976-hubble-tls\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627828 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-bpf-maps\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627849 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-etc-cni-netd\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627870 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-xtables-lock\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628075 kubelet[2706]: I0213 19:31:43.627885 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-host-proc-sys-net\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628238 kubelet[2706]: I0213 19:31:43.627940 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6adf2f9-fd62-4906-b596-ab0bc984e976-clustermesh-secrets\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628238 kubelet[2706]: I0213 19:31:43.627959 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f57p\" (UniqueName: \"kubernetes.io/projected/e6adf2f9-fd62-4906-b596-ab0bc984e976-kube-api-access-5f57p\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628238 kubelet[2706]: I0213 19:31:43.627977 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-hostproc\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628238 kubelet[2706]: I0213 19:31:43.627991 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-cni-path\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628238 kubelet[2706]: I0213 19:31:43.628006 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6adf2f9-fd62-4906-b596-ab0bc984e976-cilium-ipsec-secrets\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.628339 kubelet[2706]: I0213 19:31:43.628026 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6adf2f9-fd62-4906-b596-ab0bc984e976-host-proc-sys-kernel\") pod \"cilium-4ffqz\" (UID: \"e6adf2f9-fd62-4906-b596-ab0bc984e976\") " pod="kube-system/cilium-4ffqz"
Feb 13 19:31:43.631137 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:31:43.770859 kubelet[2706]: E0213 19:31:43.770741 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:43.772038 containerd[1536]: time="2025-02-13T19:31:43.771885622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4ffqz,Uid:e6adf2f9-fd62-4906-b596-ab0bc984e976,Namespace:kube-system,Attempt:0,}"
Feb 13 19:31:43.804350 containerd[1536]: time="2025-02-13T19:31:43.803766816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:31:43.804350 containerd[1536]: time="2025-02-13T19:31:43.803830335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:31:43.804350 containerd[1536]: time="2025-02-13T19:31:43.803842015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:31:43.804350 containerd[1536]: time="2025-02-13T19:31:43.803953494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:31:43.834211 containerd[1536]: time="2025-02-13T19:31:43.834171467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4ffqz,Uid:e6adf2f9-fd62-4906-b596-ab0bc984e976,Namespace:kube-system,Attempt:0,} returns sandbox id \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\""
Feb 13 19:31:43.834834 kubelet[2706]: E0213 19:31:43.834807 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:43.837354 containerd[1536]: time="2025-02-13T19:31:43.837260111Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:31:43.849296 containerd[1536]: time="2025-02-13T19:31:43.849245334Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92a3c928dce6f81169fcb098d16b6b45d7951b1e99e44335fa299ccdbb98772f\""
Feb 13 19:31:43.850149 containerd[1536]: time="2025-02-13T19:31:43.850119884Z" level=info msg="StartContainer for \"92a3c928dce6f81169fcb098d16b6b45d7951b1e99e44335fa299ccdbb98772f\""
Feb 13 19:31:43.893818 containerd[1536]: time="2025-02-13T19:31:43.893695024Z" level=info msg="StartContainer for \"92a3c928dce6f81169fcb098d16b6b45d7951b1e99e44335fa299ccdbb98772f\" returns successfully"
Feb 13 19:31:43.936780 containerd[1536]: time="2025-02-13T19:31:43.936720730Z" level=info msg="shim disconnected" id=92a3c928dce6f81169fcb098d16b6b45d7951b1e99e44335fa299ccdbb98772f namespace=k8s.io
Feb 13 19:31:43.937116 containerd[1536]: time="2025-02-13T19:31:43.937092806Z" level=warning msg="cleaning up after shim disconnected" id=92a3c928dce6f81169fcb098d16b6b45d7951b1e99e44335fa299ccdbb98772f namespace=k8s.io
Feb 13 19:31:43.937176 containerd[1536]: time="2025-02-13T19:31:43.937164165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:44.332359 kubelet[2706]: E0213 19:31:44.332328 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:44.334646 containerd[1536]: time="2025-02-13T19:31:44.334333180Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:31:44.342421 containerd[1536]: time="2025-02-13T19:31:44.342371297Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a508157ddf3090bff346d3b8eabfbed0739f9c725b046206b13b52a24ac3d8b4\""
Feb 13 19:31:44.343035 containerd[1536]: time="2025-02-13T19:31:44.343004490Z" level=info msg="StartContainer for \"a508157ddf3090bff346d3b8eabfbed0739f9c725b046206b13b52a24ac3d8b4\""
Feb 13 19:31:44.395277 containerd[1536]: time="2025-02-13T19:31:44.395222670Z" level=info msg="StartContainer for \"a508157ddf3090bff346d3b8eabfbed0739f9c725b046206b13b52a24ac3d8b4\" returns successfully"
Feb 13 19:31:44.417079 containerd[1536]: time="2025-02-13T19:31:44.416885926Z" level=info msg="shim disconnected" id=a508157ddf3090bff346d3b8eabfbed0739f9c725b046206b13b52a24ac3d8b4 namespace=k8s.io
Feb 13 19:31:44.417079 containerd[1536]: time="2025-02-13T19:31:44.416993965Z" level=warning msg="cleaning up after shim disconnected" id=a508157ddf3090bff346d3b8eabfbed0739f9c725b046206b13b52a24ac3d8b4 namespace=k8s.io
Feb 13 19:31:44.417079 containerd[1536]: time="2025-02-13T19:31:44.417003204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:45.336266 kubelet[2706]: E0213 19:31:45.336226 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:45.339968 containerd[1536]: time="2025-02-13T19:31:45.339339745Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:31:45.375011 containerd[1536]: time="2025-02-13T19:31:45.374955176Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69\""
Feb 13 19:31:45.375474 containerd[1536]: time="2025-02-13T19:31:45.375447851Z" level=info msg="StartContainer for \"53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69\""
Feb 13 19:31:45.455121 containerd[1536]: time="2025-02-13T19:31:45.455071234Z" level=info msg="StartContainer for \"53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69\" returns successfully"
Feb 13 19:31:45.479341 containerd[1536]: time="2025-02-13T19:31:45.479286170Z" level=info msg="shim disconnected" id=53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69 namespace=k8s.io
Feb 13 19:31:45.479341 containerd[1536]: time="2025-02-13T19:31:45.479338850Z" level=warning msg="cleaning up after shim disconnected" id=53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69 namespace=k8s.io
Feb 13 19:31:45.479587 containerd[1536]: time="2025-02-13T19:31:45.479349409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:45.733665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53f9c1b373557690aee965b1e2ec47c25abdc343ea2e71936412bce5e396cc69-rootfs.mount: Deactivated successfully.
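Each of the short-lived setup containers above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) follows the same containerd lifecycle: CreateContainer within the sandbox, StartContainer, then "shim disconnected" / "cleaning up dead shim" once the process exits. The warnings are ordinary teardown noise for containers that have already exited, not failures. To review the whole chain for this pod, including the exited steps, something like the following should work (a sketch assuming crictl; the sandbox ID is the one returned by RunPodSandbox above):

    # List every container created in the cilium-4ffqz sandbox, exited ones included
    $ crictl ps -a --pod 690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788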
Feb 13 19:31:46.340527 kubelet[2706]: E0213 19:31:46.340477 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:46.344053 containerd[1536]: time="2025-02-13T19:31:46.344002486Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:31:46.355882 containerd[1536]: time="2025-02-13T19:31:46.355831269Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925\""
Feb 13 19:31:46.357092 containerd[1536]: time="2025-02-13T19:31:46.357059779Z" level=info msg="StartContainer for \"d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925\""
Feb 13 19:31:46.400635 containerd[1536]: time="2025-02-13T19:31:46.400598782Z" level=info msg="StartContainer for \"d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925\" returns successfully"
Feb 13 19:31:46.418279 containerd[1536]: time="2025-02-13T19:31:46.418225677Z" level=info msg="shim disconnected" id=d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925 namespace=k8s.io
Feb 13 19:31:46.418279 containerd[1536]: time="2025-02-13T19:31:46.418274837Z" level=warning msg="cleaning up after shim disconnected" id=d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925 namespace=k8s.io
Feb 13 19:31:46.418279 containerd[1536]: time="2025-02-13T19:31:46.418283917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:31:46.733723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d331d5fee3718b8d880a5ce2c317f067d03cb3aff05003a3d5dda8bd28183925-rootfs.mount: Deactivated successfully.
Feb 13 19:31:47.143414 kubelet[2706]: E0213 19:31:47.143373 2706 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:31:47.346920 kubelet[2706]: E0213 19:31:47.346785 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:47.351627 containerd[1536]: time="2025-02-13T19:31:47.350904470Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:31:47.363260 containerd[1536]: time="2025-02-13T19:31:47.363196541Z" level=info msg="CreateContainer within sandbox \"690fdfd8c2b2aaaf416ef10586963ea206ebe840de005e4aa6200ab5f24aa788\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4d6f627b11058cc1c5c63e76e8f5c114adb21a78b5cf81c1d6390d7f2ae6cf9\""
Feb 13 19:31:47.365009 containerd[1536]: time="2025-02-13T19:31:47.364057135Z" level=info msg="StartContainer for \"e4d6f627b11058cc1c5c63e76e8f5c114adb21a78b5cf81c1d6390d7f2ae6cf9\""
Feb 13 19:31:47.411667 containerd[1536]: time="2025-02-13T19:31:47.411453835Z" level=info msg="StartContainer for \"e4d6f627b11058cc1c5c63e76e8f5c114adb21a78b5cf81c1d6390d7f2ae6cf9\" returns successfully"
Feb 13 19:31:47.670582 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:31:48.355583 kubelet[2706]: E0213 19:31:48.355552 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:48.370466 kubelet[2706]: I0213 19:31:48.370398 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4ffqz" podStartSLOduration=5.370384321 podStartE2EDuration="5.370384321s" podCreationTimestamp="2025-02-13 19:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:31:48.369941764 +0000 UTC m=+81.368062425" watchObservedRunningTime="2025-02-13 19:31:48.370384321 +0000 UTC m=+81.368504862"
Feb 13 19:31:48.732016 kubelet[2706]: I0213 19:31:48.731844 2706 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:31:48Z","lastTransitionTime":"2025-02-13T19:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:31:49.772721 kubelet[2706]: E0213 19:31:49.772601 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:50.352321 systemd-networkd[1228]: lxc_health: Link UP
Feb 13 19:31:50.357480 systemd-networkd[1228]: lxc_health: Gained carrier
Feb 13 19:31:51.773926 kubelet[2706]: E0213 19:31:51.773869 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:51.991042 systemd-networkd[1228]: lxc_health: Gained IPv6LL
Feb 13 19:31:52.367010 kubelet[2706]: E0213 19:31:52.366715 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:56.363907 sshd[4539]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:56.367209 systemd[1]: sshd@25-10.0.0.31:22-10.0.0.1:47830.service: Deactivated successfully.
Feb 13 19:31:56.369284 systemd-logind[1518]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:31:56.369365 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:31:56.371338 systemd-logind[1518]: Removed session 26.