Feb 13 19:55:32.881206 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:55:32.881226 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:55:32.881236 kernel: KASLR enabled
Feb 13 19:55:32.881241 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:55:32.881247 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 19:55:32.881253 kernel: random: crng init done
Feb 13 19:55:32.881259 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:55:32.881265 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 19:55:32.881271 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:55:32.881279 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881285 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881291 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881296 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881302 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881310 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881317 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881323 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881330 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:55:32.881336 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:55:32.881342 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:55:32.881349 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:55:32.881355 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:55:32.881361 kernel: Zone ranges:
Feb 13 19:55:32.881367 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:55:32.881373 kernel: DMA32 empty
Feb 13 19:55:32.881381 kernel: Normal empty
Feb 13 19:55:32.881387 kernel: Movable zone start for each node
Feb 13 19:55:32.881393 kernel: Early memory node ranges
Feb 13 19:55:32.881399 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:55:32.881406 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:55:32.881412 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:55:32.881418 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:55:32.881424 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:55:32.881431 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:55:32.881437 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:55:32.881443 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:55:32.881449 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:55:32.881457 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:55:32.881463 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:55:32.881470 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:55:32.881478 kernel: psci: Trusted OS migration not required
Feb 13 19:55:32.881485 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:55:32.881492 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:55:32.881500 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:55:32.881507 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:55:32.881514 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:55:32.881520 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:55:32.881527 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:55:32.881534 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:55:32.881540 kernel: CPU features: detected: Spectre-v4
Feb 13 19:55:32.881547 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:55:32.881554 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:55:32.881560 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:55:32.881568 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:55:32.881575 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:55:32.881582 kernel: alternatives: applying boot alternatives
Feb 13 19:55:32.881589 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:55:32.881596 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:55:32.881603 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:55:32.881610 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:55:32.881617 kernel: Fallback order for Node 0: 0
Feb 13 19:55:32.881623 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:55:32.881630 kernel: Policy zone: DMA
Feb 13 19:55:32.881636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:55:32.881644 kernel: software IO TLB: area num 4.
Feb 13 19:55:32.881651 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:55:32.881658 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 19:55:32.881665 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:55:32.881672 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:55:32.881679 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:55:32.881686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:55:32.881692 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:55:32.881699 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:55:32.881706 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:55:32.881713 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:55:32.881720 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:55:32.881728 kernel: GICv3: 256 SPIs implemented
Feb 13 19:55:32.881735 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:55:32.881741 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:55:32.881748 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:55:32.881755 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:55:32.881761 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:55:32.881768 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:55:32.881775 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:55:32.881782 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:55:32.881789 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:55:32.881796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:55:32.881804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:55:32.881811 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:55:32.881818 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:55:32.881825 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:55:32.881831 kernel: arm-pv: using stolen time PV
Feb 13 19:55:32.881838 kernel: Console: colour dummy device 80x25
Feb 13 19:55:32.881852 kernel: ACPI: Core revision 20230628
Feb 13 19:55:32.881859 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:55:32.881866 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:55:32.881873 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:55:32.881882 kernel: landlock: Up and running.
Feb 13 19:55:32.881889 kernel: SELinux: Initializing.
Feb 13 19:55:32.881895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:55:32.881902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:55:32.881909 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:55:32.881916 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:55:32.881923 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:55:32.881930 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:55:32.881937 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:55:32.881945 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:55:32.881952 kernel: Remapping and enabling EFI services.
Feb 13 19:55:32.881959 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:55:32.881966 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:55:32.881973 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:55:32.881980 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:55:32.881987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:55:32.881994 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:55:32.882001 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:55:32.882008 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:55:32.882016 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:55:32.882023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:55:32.882034 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:55:32.882042 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:55:32.882050 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:55:32.882057 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:55:32.882064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:55:32.882071 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:55:32.882079 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:55:32.882087 kernel: SMP: Total of 4 processors activated.
Feb 13 19:55:32.882094 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:55:32.882102 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:55:32.882109 kernel: CPU features: detected: Common not Private translations
Feb 13 19:55:32.882116 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:55:32.882123 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:55:32.882131 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:55:32.882138 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:55:32.882146 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:55:32.882154 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:55:32.882161 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:55:32.882168 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:55:32.882176 kernel: alternatives: applying system-wide alternatives
Feb 13 19:55:32.882234 kernel: devtmpfs: initialized
Feb 13 19:55:32.882243 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:55:32.882251 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:55:32.882258 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:55:32.882268 kernel: SMBIOS 3.0.0 present.
Feb 13 19:55:32.882275 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 19:55:32.882282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:55:32.882290 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:55:32.882297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:55:32.882305 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:55:32.882312 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:55:32.882320 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 19:55:32.882327 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:55:32.882335 kernel: cpuidle: using governor menu
Feb 13 19:55:32.882342 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:55:32.882350 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:55:32.882357 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:55:32.882364 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:55:32.882372 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:55:32.882379 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:55:32.882386 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:55:32.882393 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:55:32.882402 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:55:32.882409 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:55:32.882416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:55:32.882424 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:55:32.882431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:55:32.882438 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:55:32.882445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:55:32.882453 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:55:32.882460 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:55:32.882468 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:55:32.882476 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:55:32.882483 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:55:32.882490 kernel: ACPI: Interpreter enabled
Feb 13 19:55:32.882497 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:55:32.882504 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:55:32.882512 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:55:32.882519 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:55:32.882526 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:55:32.882648 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:55:32.882721 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:55:32.882788 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:55:32.882862 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:55:32.882929 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:55:32.882939 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:55:32.882946 kernel: PCI host bridge to bus 0000:00
Feb 13 19:55:32.883017 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:55:32.883092 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:55:32.883152 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:55:32.883227 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:55:32.883308 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:55:32.883383 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:55:32.883453 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:55:32.883519 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:55:32.883584 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:55:32.883648 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:55:32.883712 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:55:32.883777 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:55:32.883840 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:55:32.883908 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:55:32.883971 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:55:32.883981 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:55:32.883988 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:55:32.883996 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:55:32.884003 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:55:32.884010 kernel: iommu: Default domain type: Translated
Feb 13 19:55:32.884018 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:55:32.884025 kernel: efivars: Registered efivars operations
Feb 13 19:55:32.884034 kernel: vgaarb: loaded
Feb 13 19:55:32.884041 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:55:32.884048 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:55:32.884056 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:55:32.884063 kernel: pnp: PnP ACPI init
Feb 13 19:55:32.884137 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:55:32.884148 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:55:32.884155 kernel: NET: Registered PF_INET protocol family
Feb 13 19:55:32.884164 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:55:32.884171 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:55:32.884179 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:55:32.884195 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:55:32.884206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:55:32.884214 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:55:32.884222 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:55:32.884229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:55:32.884236 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:55:32.884246 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:55:32.884254 kernel: kvm [1]: HYP mode not available
Feb 13 19:55:32.884261 kernel: Initialise system trusted keyrings
Feb 13 19:55:32.884268 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:55:32.884275 kernel: Key type asymmetric registered
Feb 13 19:55:32.884282 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:55:32.884289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:55:32.884297 kernel: io scheduler mq-deadline registered
Feb 13 19:55:32.884304 kernel: io scheduler kyber registered
Feb 13 19:55:32.884313 kernel: io scheduler bfq registered
Feb 13 19:55:32.884335 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:55:32.884342 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:55:32.884350 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:55:32.884427 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:55:32.884437 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:55:32.884444 kernel: thunder_xcv, ver 1.0
Feb 13 19:55:32.884452 kernel: thunder_bgx, ver 1.0
Feb 13 19:55:32.884459 kernel: nicpf, ver 1.0
Feb 13 19:55:32.884468 kernel: nicvf, ver 1.0
Feb 13 19:55:32.884544 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:55:32.884606 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:55:32 UTC (1739476532)
Feb 13 19:55:32.884616 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:55:32.884624 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:55:32.884631 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:55:32.884639 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:55:32.884646 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:55:32.884655 kernel: Segment Routing with IPv6
Feb 13 19:55:32.884662 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:55:32.884670 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:55:32.884677 kernel: Key type dns_resolver registered
Feb 13 19:55:32.884684 kernel: registered taskstats version 1
Feb 13 19:55:32.884691 kernel: Loading compiled-in X.509 certificates
Feb 13 19:55:32.884699 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:55:32.884706 kernel: Key type .fscrypt registered
Feb 13 19:55:32.884714 kernel: Key type fscrypt-provisioning registered
Feb 13 19:55:32.884723 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:55:32.884730 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:55:32.884738 kernel: ima: No architecture policies found
Feb 13 19:55:32.884745 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:55:32.884753 kernel: clk: Disabling unused clocks
Feb 13 19:55:32.884760 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:55:32.884768 kernel: Run /init as init process
Feb 13 19:55:32.884775 kernel: with arguments:
Feb 13 19:55:32.884782 kernel: /init
Feb 13 19:55:32.884791 kernel: with environment:
Feb 13 19:55:32.884798 kernel: HOME=/
Feb 13 19:55:32.884806 kernel: TERM=linux
Feb 13 19:55:32.884813 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:55:32.884822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:55:32.884832 systemd[1]: Detected virtualization kvm.
Feb 13 19:55:32.884840 systemd[1]: Detected architecture arm64.
Feb 13 19:55:32.884858 systemd[1]: Running in initrd.
Feb 13 19:55:32.884868 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:55:32.884875 systemd[1]: Hostname set to <localhost>.
Feb 13 19:55:32.884883 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:55:32.884891 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:55:32.884898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:55:32.884906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:55:32.884914 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:55:32.884922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:55:32.884932 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:55:32.884939 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:55:32.884949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:55:32.884956 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:55:32.884964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:55:32.884972 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:55:32.884982 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:55:32.884989 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:55:32.884997 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:55:32.885005 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:55:32.885015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:55:32.885024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:55:32.885032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:55:32.885042 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:55:32.885053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:55:32.885062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:55:32.885070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:55:32.885078 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:55:32.885086 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:55:32.885094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:55:32.885102 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:55:32.885112 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:55:32.885120 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:55:32.885127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:55:32.885137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:55:32.885145 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:55:32.885158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:55:32.885168 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:55:32.885177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:55:32.885241 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:55:32.885260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:55:32.885268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:55:32.885278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:55:32.885287 systemd-journald[238]: Journal started
Feb 13 19:55:32.885305 systemd-journald[238]: Runtime Journal (/run/log/journal/1b45ac25e49b4a6bae4e790f8e39312e) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:55:32.871941 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:55:32.887586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:55:32.889194 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:55:32.890682 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:55:32.891555 kernel: Bridge firewalling registered
Feb 13 19:55:32.892247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:55:32.894176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:55:32.895685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:55:32.899317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:55:32.903674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:55:32.906222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:55:32.908102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:55:32.909210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:55:32.916347 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:55:32.918137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:55:32.927134 dracut-cmdline[275]: dracut-dracut-053
Feb 13 19:55:32.929511 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:55:32.945999 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 19:55:32.946015 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:55:32.946046 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:55:32.950704 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 19:55:32.952000 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:55:32.953309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:55:32.998218 kernel: SCSI subsystem initialized
Feb 13 19:55:33.002200 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:55:33.009209 kernel: iscsi: registered transport (tcp)
Feb 13 19:55:33.021483 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:55:33.021501 kernel: QLogic iSCSI HBA Driver
Feb 13 19:55:33.062005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:55:33.070329 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:55:33.085514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:55:33.085548 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:55:33.085569 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:55:33.134212 kernel: raid6: neonx8 gen() 15793 MB/s
Feb 13 19:55:33.151214 kernel: raid6: neonx4 gen() 15653 MB/s
Feb 13 19:55:33.168207 kernel: raid6: neonx2 gen() 13239 MB/s
Feb 13 19:55:33.185207 kernel: raid6: neonx1 gen() 10495 MB/s
Feb 13 19:55:33.202206 kernel: raid6: int64x8 gen() 6958 MB/s
Feb 13 19:55:33.219207 kernel: raid6: int64x4 gen() 7344 MB/s
Feb 13 19:55:33.236199 kernel: raid6: int64x2 gen() 6133 MB/s
Feb 13 19:55:33.253207 kernel: raid6: int64x1 gen() 5059 MB/s
Feb 13 19:55:33.253231 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Feb 13 19:55:33.270213 kernel: raid6: .... xor() 11905 MB/s, rmw enabled
Feb 13 19:55:33.270236 kernel: raid6: using neon recovery algorithm
Feb 13 19:55:33.275496 kernel: xor: measuring software checksum speed
Feb 13 19:55:33.275510 kernel: 8regs : 19764 MB/sec
Feb 13 19:55:33.275519 kernel: 32regs : 19660 MB/sec
Feb 13 19:55:33.276425 kernel: arm64_neon : 27087 MB/sec
Feb 13 19:55:33.276447 kernel: xor: using function: arm64_neon (27087 MB/sec)
Feb 13 19:55:33.328216 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:55:33.338173 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:55:33.346339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:55:33.356924 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 19:55:33.360028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:55:33.366322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:55:33.377111 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 19:55:33.401549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:55:33.413299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:55:33.451769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:55:33.462342 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:55:33.475220 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:55:33.476518 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:55:33.479093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:55:33.480113 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:55:33.486426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:55:33.494844 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:55:33.504997 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:55:33.505099 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:55:33.505110 kernel: GPT:9289727 != 19775487
Feb 13 19:55:33.505119 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:55:33.505129 kernel: GPT:9289727 != 19775487
Feb 13 19:55:33.505137 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:55:33.505149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:55:33.495547 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:55:33.504648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:55:33.504766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:55:33.506475 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:55:33.507417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:55:33.507539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:55:33.508930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:55:33.520415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:55:33.531426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:55:33.534258 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (517)
Feb 13 19:55:33.535217 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Feb 13 19:55:33.537737 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:55:33.550257 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:55:33.553869 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:55:33.554816 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:55:33.559898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:55:33.571300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:55:33.572694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:55:33.577081 disk-uuid[552]: Primary Header is updated.
Feb 13 19:55:33.577081 disk-uuid[552]: Secondary Entries is updated.
Feb 13 19:55:33.577081 disk-uuid[552]: Secondary Header is updated.
Feb 13 19:55:33.579431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:55:33.591207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:55:33.597917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:55:34.591215 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:55:34.591341 disk-uuid[553]: The operation has completed successfully.
Feb 13 19:55:34.610930 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:55:34.611041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:55:34.635373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:55:34.638030 sh[575]: Success
Feb 13 19:55:34.653257 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:55:34.680459 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:55:34.695510 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:55:34.698245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:55:34.706973 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:55:34.707004 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:55:34.707015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:55:34.707026 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:55:34.708216 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:55:34.711378 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:55:34.712377 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:55:34.723384 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:55:34.724623 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:55:34.731467 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:55:34.731503 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:55:34.731514 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:55:34.733281 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:55:34.739986 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:55:34.741194 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:55:34.747664 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:55:34.753329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:55:34.813618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:55:34.825319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:55:34.845097 systemd-networkd[768]: lo: Link UP
Feb 13 19:55:34.845106 systemd-networkd[768]: lo: Gained carrier
Feb 13 19:55:34.845767 systemd-networkd[768]: Enumeration completed
Feb 13 19:55:34.845855 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:55:34.846272 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:55:34.846274 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:55:34.851153 ignition[666]: Ignition 2.19.0
Feb 13 19:55:34.846979 systemd-networkd[768]: eth0: Link UP
Feb 13 19:55:34.851159 ignition[666]: Stage: fetch-offline
Feb 13 19:55:34.846982 systemd-networkd[768]: eth0: Gained carrier
Feb 13 19:55:34.851208 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:55:34.846989 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:55:34.851217 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:55:34.847363 systemd[1]: Reached target network.target - Network.
Feb 13 19:55:34.851365 ignition[666]: parsed url from cmdline: ""
Feb 13 19:55:34.851368 ignition[666]: no config URL provided
Feb 13 19:55:34.851373 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:55:34.851379 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:55:34.851399 ignition[666]: op(1): [started] loading QEMU firmware config module
Feb 13 19:55:34.851403 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:55:34.866056 ignition[666]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:55:34.868269 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:55:34.902905 ignition[666]: parsing config with SHA512: dde5c51ca3ac2259f9bcdd351d6f92d3be170b624dc16aebe1edc3f1e7d16ab5a9d7ceb9cbe518a58c0bf654276282537b01f67e7ee6c2ba391e1c32d83ba297
Feb 13 19:55:34.908072 unknown[666]: fetched base config from "system"
Feb 13 19:55:34.908081 unknown[666]: fetched user config from "qemu"
Feb 13 19:55:34.908542 ignition[666]: fetch-offline: fetch-offline passed
Feb 13 19:55:34.908606 ignition[666]: Ignition finished successfully
Feb 13 19:55:34.910501 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:55:34.911485 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:55:34.917378 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:55:34.927054 ignition[775]: Ignition 2.19.0
Feb 13 19:55:34.927064 ignition[775]: Stage: kargs
Feb 13 19:55:34.927221 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:55:34.927230 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:55:34.928079 ignition[775]: kargs: kargs passed
Feb 13 19:55:34.928120 ignition[775]: Ignition finished successfully
Feb 13 19:55:34.929952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:55:34.931489 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:55:34.943534 ignition[783]: Ignition 2.19.0
Feb 13 19:55:34.943544 ignition[783]: Stage: disks
Feb 13 19:55:34.943692 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:55:34.943701 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:55:34.944539 ignition[783]: disks: disks passed
Feb 13 19:55:34.945879 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:55:34.944580 ignition[783]: Ignition finished successfully
Feb 13 19:55:34.947388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:55:34.948518 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:55:34.949765 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:55:34.951044 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:55:34.952462 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:55:34.961318 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:55:34.970570 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:55:34.974072 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:55:34.977155 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:55:35.024201 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:55:35.024829 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:55:35.025811 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:55:35.033256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:55:35.034656 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:55:35.035749 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:55:35.035783 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:55:35.040893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Feb 13 19:55:35.035803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:55:35.043739 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:55:35.043758 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:55:35.043769 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:55:35.042123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:55:35.045608 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:55:35.045672 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:55:35.047267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:55:35.083600 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:55:35.087057 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:55:35.090028 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:55:35.093333 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:55:35.158552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:55:35.171302 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:55:35.173444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:55:35.177200 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:55:35.190450 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:55:35.194095 ignition[914]: INFO : Ignition 2.19.0
Feb 13 19:55:35.194095 ignition[914]: INFO : Stage: mount
Feb 13 19:55:35.195238 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:55:35.195238 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:55:35.195238 ignition[914]: INFO : mount: mount passed
Feb 13 19:55:35.195238 ignition[914]: INFO : Ignition finished successfully
Feb 13 19:55:35.196414 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:55:35.205288 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:55:35.705715 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:55:35.716404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:55:35.721925 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Feb 13 19:55:35.721964 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:55:35.721985 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:55:35.723189 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:55:35.725209 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:55:35.726040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:55:35.746510 ignition[944]: INFO : Ignition 2.19.0
Feb 13 19:55:35.746510 ignition[944]: INFO : Stage: files
Feb 13 19:55:35.748029 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:55:35.748029 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:55:35.748029 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:55:35.751386 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:55:35.751386 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:55:35.751386 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:55:35.751386 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:55:35.751386 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:55:35.750874 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 19:55:35.758544 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:55:35.758544 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 19:55:36.020395 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:55:36.134320 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 19:55:36.637828 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:55:36.639799 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:55:36.639799 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:55:36.869343 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:55:36.938756 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:55:36.940644 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 19:55:37.214841 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:55:37.737711 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:55:37.737711 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:55:37.741328 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:55:37.773696 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:55:37.777755 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:55:37.780399 ignition[944]: INFO : files: files passed
Feb 13 19:55:37.780399 ignition[944]: INFO : Ignition finished successfully
Feb 13 19:55:37.780895 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:55:37.792344 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:55:37.794201 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:55:37.797122 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:55:37.798212 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:55:37.802474 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:55:37.805732 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:55:37.805732 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:55:37.808685 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:55:37.810345 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:55:37.811582 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:55:37.825408 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:55:37.845805 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:55:37.846654 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:55:37.848063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:55:37.848908 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:55:37.851689 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:55:37.853593 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:55:37.871017 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:55:37.882395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:55:37.890296 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:55:37.891292 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:55:37.892841 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:55:37.894107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:55:37.894261 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:55:37.896200 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:55:37.897721 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:55:37.898976 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:55:37.900273 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:55:37.901733 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:55:37.903125 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:55:37.904494 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:55:37.905989 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:55:37.907428 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:55:37.909048 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:55:37.911431 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:55:37.911874 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:55:37.914459 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:55:37.915332 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:55:37.916721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:55:37.918324 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:55:37.919244 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:55:37.919382 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:55:37.921690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:55:37.921811 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:55:37.923261 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:55:37.924452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:55:37.928278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:55:37.929240 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:55:37.930855 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:55:37.932065 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:55:37.932157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:55:37.933337 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:55:37.933415 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:55:37.934526 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:55:37.934632 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:55:37.935953 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:55:37.936048 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:55:37.948544 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:55:37.949981 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:55:37.950733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:55:37.950869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:55:37.952568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:55:37.952680 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:55:37.958121 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:55:37.959214 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:55:37.962601 ignition[999]: INFO : Ignition 2.19.0 Feb 13 19:55:37.962601 ignition[999]: INFO : Stage: umount Feb 13 19:55:37.965400 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:55:37.965400 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:55:37.965400 ignition[999]: INFO : umount: umount passed Feb 13 19:55:37.965400 ignition[999]: INFO : Ignition finished successfully Feb 13 19:55:37.965741 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:55:37.967801 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:55:37.969996 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:55:37.972209 systemd[1]: Stopped target network.target - Network. Feb 13 19:55:37.973407 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:55:37.973482 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:55:37.974807 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:55:37.974860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:55:37.977138 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:55:37.977250 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:55:37.978908 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:55:37.978982 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:55:37.980659 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:55:37.982227 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:55:37.984247 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:55:37.984339 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:55:37.986093 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:55:37.986218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:55:37.989294 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 19:55:37.991784 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:55:37.991940 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:55:37.994440 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:55:37.994540 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:55:37.997614 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:55:37.997667 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:55:38.007353 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:55:38.008268 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:55:38.008340 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:55:38.010331 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:55:38.010380 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:55:38.012059 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:55:38.012110 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:55:38.014237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:55:38.014289 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
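This umount stage, like the earlier Ignition stages, finds no base config under /usr/lib/ignition/base.d and no platform config for "qemu"; on QEMU the user-supplied config is normally injected through the firmware configuration device rather than a file on disk. A hypothetical invocation showing just that part:

  # pass the Ignition config via fw_cfg (remaining QEMU machine flags omitted)
  qemu-system-aarch64 -fw_cfg name=opt/com.coreos/config,file=./config.ign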
Feb 13 19:55:38.016226 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:55:38.025504 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:55:38.025607 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:55:38.031410 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:55:38.032467 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:55:38.035241 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:55:38.035290 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:55:38.037262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:55:38.037300 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:55:38.040076 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:55:38.040134 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:55:38.043063 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:55:38.043117 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:55:38.045582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:55:38.045628 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:55:38.059365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:55:38.060115 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:55:38.060173 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:55:38.062022 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:55:38.062064 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:55:38.063739 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:55:38.063777 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:55:38.065591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:55:38.065631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:55:38.067555 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:55:38.067653 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:55:38.069635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:55:38.073088 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:55:38.083977 systemd[1]: Switching root. Feb 13 19:55:38.102228 systemd-journald[238]: Journal stopped Feb 13 19:55:38.864601 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:55:38.864654 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:55:38.864666 kernel: SELinux: policy capability open_perms=1 Feb 13 19:55:38.864676 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:55:38.864690 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:55:38.864704 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:55:38.864714 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:55:38.864724 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:55:38.864733 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:55:38.864743 kernel: audit: type=1403 audit(1739476538.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:55:38.864754 systemd[1]: Successfully loaded SELinux policy in 33.024ms. Feb 13 19:55:38.864772 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.531ms. Feb 13 19:55:38.864785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:55:38.864798 systemd[1]: Detected virtualization kvm. Feb 13 19:55:38.864808 systemd[1]: Detected architecture arm64. Feb 13 19:55:38.864827 systemd[1]: Detected first boot. Feb 13 19:55:38.864840 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:55:38.864850 zram_generator::config[1044]: No configuration found. Feb 13 19:55:38.864862 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:55:38.864873 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:55:38.864883 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:55:38.864896 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:55:38.864907 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:55:38.864918 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:55:38.864928 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:55:38.864940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:55:38.864955 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:55:38.864965 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:55:38.864976 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:55:38.864987 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:55:38.864999 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:55:38.865010 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:55:38.865020 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:55:38.865031 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:55:38.865042 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
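The type=1403 audit record is the kernel noting the SELinux policy load that systemd performs right after switching root; the relabel of /dev, /run, and the cgroup tree follows it. On a running machine the resulting state can be confirmed with the standard SELinux tools, if present:

  getenforce                           # Enforcing, Permissive, or Disabled
  journalctl -k -b | grep 'type=1403'  # the same policy-load audit event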
Feb 13 19:55:38.865053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:55:38.865063 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:55:38.865074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:55:38.865084 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:55:38.865096 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:55:38.865107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:55:38.865117 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:55:38.865128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:55:38.865139 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:55:38.865149 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:55:38.865160 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:55:38.865171 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:55:38.865194 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:55:38.865208 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:55:38.865218 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:55:38.865229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:55:38.865240 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:55:38.865250 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:55:38.865260 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:55:38.865271 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:55:38.865282 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:55:38.865295 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:55:38.865305 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:55:38.865316 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:55:38.865327 systemd[1]: Reached target machines.target - Containers. Feb 13 19:55:38.865338 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:55:38.865348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:55:38.865359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:55:38.865369 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:55:38.865381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:55:38.865392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:55:38.865402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:55:38.865414 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:55:38.865424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:55:38.865435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:55:38.865445 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:55:38.865456 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:55:38.865468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:55:38.865478 kernel: fuse: init (API version 7.39) Feb 13 19:55:38.865489 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:55:38.865499 kernel: loop: module loaded Feb 13 19:55:38.865509 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:55:38.865519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:55:38.865529 kernel: ACPI: bus type drm_connector registered Feb 13 19:55:38.865543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:55:38.865555 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:55:38.865570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:55:38.865581 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:55:38.865591 systemd[1]: Stopped verity-setup.service. Feb 13 19:55:38.865618 systemd-journald[1112]: Collecting audit messages is disabled. Feb 13 19:55:38.865640 systemd-journald[1112]: Journal started Feb 13 19:55:38.865661 systemd-journald[1112]: Runtime Journal (/run/log/journal/1b45ac25e49b4a6bae4e790f8e39312e) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:55:38.660421 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:55:38.691877 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:55:38.692257 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:55:38.868218 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:55:38.868334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:55:38.869212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:55:38.870099 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:55:38.870993 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:55:38.871933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:55:38.872909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:55:38.873937 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:55:38.875097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:55:38.877478 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:55:38.877630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:55:38.878763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:55:38.878906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:55:38.881501 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:55:38.881644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:55:38.882778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:55:38.882921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 19:55:38.884090 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:55:38.884239 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:55:38.885273 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:55:38.885400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:55:38.887571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:55:38.888629 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:55:38.889761 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:55:38.902611 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:55:38.920356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:55:38.922359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:55:38.923494 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:55:38.923535 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:55:38.925554 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:55:38.927789 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:55:38.929951 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:55:38.931090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:55:38.932752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:55:38.935018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:55:38.936021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:55:38.939377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:55:38.940196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:55:38.944397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:55:38.944555 systemd-journald[1112]: Time spent on flushing to /var/log/journal/1b45ac25e49b4a6bae4e790f8e39312e is 14.557ms for 859 entries. Feb 13 19:55:38.944555 systemd-journald[1112]: System Journal (/var/log/journal/1b45ac25e49b4a6bae4e790f8e39312e) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:55:38.985321 systemd-journald[1112]: Received client request to flush runtime journal. Feb 13 19:55:38.985360 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:55:38.948260 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:55:38.953160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:55:38.955622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:55:38.956843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:55:38.957865 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
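The modprobe@dm_mod, drm, efi_pstore, fuse, and loop jobs above are all instances of one template unit, modprobe@.service, which passes its instance name to modprobe; that is why each module gets its own start/finish pair. The shipped template is essentially:

  # /usr/lib/systemd/system/modprobe@.service (abridged)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no

  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %I

The leading "-" on ExecStart makes a missing module non-fatal, so optional modules can be requested unconditionally.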
Feb 13 19:55:38.959045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:55:38.960400 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:55:38.964620 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:55:38.979379 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:55:38.984806 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:55:38.985855 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Feb 13 19:55:38.985867 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Feb 13 19:55:38.990349 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:55:38.991762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:55:38.997767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:55:39.002273 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:55:39.012064 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:55:39.013721 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:55:39.015226 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:55:39.021945 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:55:39.030997 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 19:55:39.047563 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:55:39.057371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:55:39.071910 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Feb 13 19:55:39.071932 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Feb 13 19:55:39.077249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:55:39.081209 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 19:55:39.123215 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:55:39.131271 kernel: loop4: detected capacity change from 0 to 201592 Feb 13 19:55:39.140203 kernel: loop5: detected capacity change from 0 to 114328 Feb 13 19:55:39.144797 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:55:39.145214 (sd-merge)[1182]: Merged extensions into '/usr'. Feb 13 19:55:39.149049 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:55:39.149159 systemd[1]: Reloading... Feb 13 19:55:39.204222 zram_generator::config[1208]: No configuration found. Feb 13 19:55:39.299224 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:55:39.302136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:55:39.342032 systemd[1]: Reloading finished in 192 ms. Feb 13 19:55:39.370763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
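The (sd-merge) lines show systemd-sysext discovering the containerd, docker, and kubernetes extension images (the kubernetes one was placed under /opt/extensions and symlinked from /etc/extensions by Ignition earlier) and overlaying them onto /usr, which is what triggers the systemd reload that follows. The merge can be inspected or redone at runtime:

  systemd-sysext status    # which hierarchies have extensions merged, and which images
  systemd-sysext refresh   # re-merge after adding or removing an image under /etc/extensions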
Feb 13 19:55:39.373217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:55:39.383377 systemd[1]: Starting ensure-sysext.service... Feb 13 19:55:39.385079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:55:39.395745 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:55:39.395762 systemd[1]: Reloading... Feb 13 19:55:39.408135 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:55:39.408412 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:55:39.409039 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:55:39.409258 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 19:55:39.409317 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 19:55:39.412519 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:55:39.412533 systemd-tmpfiles[1244]: Skipping /boot Feb 13 19:55:39.423210 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:55:39.423223 systemd-tmpfiles[1244]: Skipping /boot Feb 13 19:55:39.451226 zram_generator::config[1274]: No configuration found. Feb 13 19:55:39.538321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:55:39.577979 systemd[1]: Reloading finished in 181 ms. Feb 13 19:55:39.593983 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:55:39.607582 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:55:39.615383 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:55:39.618421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:55:39.620791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:55:39.623975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:55:39.626469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:55:39.630475 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:55:39.640253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:55:39.642771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:55:39.646303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:55:39.649423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:55:39.650763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:55:39.654593 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:55:39.656632 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:55:39.657398 systemd-udevd[1313]: Using default interface naming scheme 'v255'. 
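The systemd-tmpfiles warnings above are benign: two tmpfiles.d snippets declare the same path ("/root", "/var/log/journal", "/var/lib/systemd"), and the later duplicate is ignored. Entries use the usual one-line-per-path format; a hypothetical snippet in the same shape as the ones being parsed:

  # /etc/tmpfiles.d/example.conf (hypothetical)
  # Type  Path              Mode  User  Group            Age  Argument
  d       /var/log/journal  2755  root  systemd-journal  -    -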
Feb 13 19:55:39.658584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:55:39.659375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:55:39.661580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:55:39.663352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:55:39.665113 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:55:39.665279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:55:39.672723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:55:39.672961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:55:39.677510 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:55:39.681750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:55:39.686522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:55:39.691530 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:55:39.693998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:55:39.697172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:55:39.698145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:55:39.698840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:55:39.703014 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:55:39.706838 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:55:39.712999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:55:39.713153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:55:39.717784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:55:39.717923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:55:39.719294 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:55:39.719425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:55:39.726317 systemd[1]: Finished ensure-sysext.service. Feb 13 19:55:39.734283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1334) Feb 13 19:55:39.749155 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:55:39.754357 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:55:39.756300 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:55:39.758305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:55:39.762711 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:55:39.768556 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:55:39.770068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:55:39.770124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:55:39.771592 augenrules[1373]: No rules Feb 13 19:55:39.772365 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:55:39.775547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:55:39.776169 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:55:39.812802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:55:39.826830 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:55:39.828144 systemd-resolved[1312]: Positive Trust Anchors: Feb 13 19:55:39.828157 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:55:39.828203 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:55:39.836438 systemd-resolved[1312]: Defaulting to hostname 'linux'. Feb 13 19:55:39.840448 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:55:39.842267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:55:39.852209 systemd-networkd[1374]: lo: Link UP Feb 13 19:55:39.852220 systemd-networkd[1374]: lo: Gained carrier Feb 13 19:55:39.852876 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:55:39.854147 systemd-networkd[1374]: Enumeration completed Feb 13 19:55:39.854210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:55:39.855200 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:55:39.855288 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:55:39.855330 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:55:39.856313 systemd-networkd[1374]: eth0: Link UP Feb 13 19:55:39.856685 systemd-networkd[1374]: eth0: Gained carrier Feb 13 19:55:39.856782 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:55:39.856849 systemd[1]: Reached target network.target - Network. Feb 13 19:55:39.858039 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:55:39.869374 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
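eth0 above is matched by Flatcar's catch-all zz-default.network, which enables DHCP on any interface nothing else has claimed; the "potentially unpredictable interface name" note is networkd pointing out that the match is by wildcard rather than by a stable name. The effective configuration is roughly this (a sketch, not the verbatim shipped file):

  # /usr/lib/systemd/network/zz-default.network (approximate)
  [Match]
  Name=*

  [Network]
  DHCP=yes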
Feb 13 19:55:39.870274 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:55:39.874298 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Feb 13 19:55:39.875299 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:55:39.875356 systemd-timesyncd[1380]: Initial clock synchronization to Thu 2025-02-13 19:55:39.480895 UTC. Feb 13 19:55:39.880840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:55:39.903630 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:55:39.917388 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:55:39.929289 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:55:39.936363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:55:39.961227 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:55:39.963097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:55:39.964300 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:55:39.965446 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:55:39.966697 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:55:39.968129 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:55:39.969460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:55:39.970671 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:55:39.971894 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:55:39.971934 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:55:39.972841 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:55:39.974641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:55:39.977078 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:55:39.986305 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:55:39.988338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:55:39.989641 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:55:39.990566 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:55:39.991288 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:55:39.992153 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:55:39.992204 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:55:39.993213 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:55:39.995043 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:55:39.998312 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:55:39.999339 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
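With the DHCPv4 lease in place, systemd-timesyncd contacts the NTP server at the gateway address (10.0.0.1:123) and steps the clock. Both results can be checked later with the standard systemd CLIs:

  networkctl status eth0         # lease address, gateway, and DNS from DHCP
  timedatectl timesync-status    # server, stratum, poll interval, last sync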
Feb 13 19:55:40.001962 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:55:40.003800 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:55:40.008342 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:55:40.010319 jq[1409]: false Feb 13 19:55:40.012331 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:55:40.014393 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:55:40.017432 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:55:40.020382 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:55:40.021948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:55:40.022443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:55:40.023143 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:55:40.026530 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:55:40.028110 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:55:40.031098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:55:40.031293 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:55:40.032018 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:55:40.033246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:55:40.035296 dbus-daemon[1408]: [system] SELinux support is enabled Feb 13 19:55:40.036007 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:55:40.038885 jq[1421]: true Feb 13 19:55:40.040926 extend-filesystems[1410]: Found loop3 Feb 13 19:55:40.040926 extend-filesystems[1410]: Found loop4 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found loop5 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda1 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda2 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda3 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found usr Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda4 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda6 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda7 Feb 13 19:55:40.053749 extend-filesystems[1410]: Found vda9 Feb 13 19:55:40.053749 extend-filesystems[1410]: Checking size of /dev/vda9 Feb 13 19:55:40.048863 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:55:40.049043 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
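extend-filesystems has walked the block devices, found the root filesystem on /dev/vda9, and is checking whether the filesystem is smaller than the partition; the resize2fs run that follows grows ext4 online, with / still mounted. The manual equivalent, using the device names from the log:

  lsblk /dev/vda         # compare partition size with the filesystem size
  resize2fs /dev/vda9    # online ext4 grow to fill the partition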
Feb 13 19:55:40.066475 jq[1430]: true Feb 13 19:55:40.075107 extend-filesystems[1410]: Resized partition /dev/vda9 Feb 13 19:55:40.079358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1347) Feb 13 19:55:40.075213 (ntainerd)[1435]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:55:40.079681 tar[1425]: linux-arm64/LICENSE Feb 13 19:55:40.079681 tar[1425]: linux-arm64/helm Feb 13 19:55:40.079845 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:55:40.077305 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:55:40.077330 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:55:40.080308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:55:40.080326 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:55:40.084173 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:55:40.109288 update_engine[1420]: I20250213 19:55:40.109061 1420 main.cc:92] Flatcar Update Engine starting Feb 13 19:55:40.111211 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:55:40.118250 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:55:40.120405 update_engine[1420]: I20250213 19:55:40.119153 1420 update_check_scheduler.cc:74] Next update check in 5m46s Feb 13 19:55:40.126236 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:55:40.126634 systemd-logind[1418]: New seat seat0. Feb 13 19:55:40.132874 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:55:40.132874 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:55:40.132874 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:55:40.131620 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:55:40.144297 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:55:40.144381 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Feb 13 19:55:40.132545 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:55:40.133817 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:55:40.134086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:55:40.139696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:55:40.145122 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:55:40.196518 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:55:40.286634 containerd[1435]: time="2025-02-13T19:55:40.286543875Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:55:40.315220 containerd[1435]: time="2025-02-13T19:55:40.315163714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:55:40.316625 containerd[1435]: time="2025-02-13T19:55:40.316564802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:55:40.316625 containerd[1435]: time="2025-02-13T19:55:40.316603021Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:55:40.316625 containerd[1435]: time="2025-02-13T19:55:40.316626142Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:55:40.316799 containerd[1435]: time="2025-02-13T19:55:40.316771146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:55:40.316799 containerd[1435]: time="2025-02-13T19:55:40.316795446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.316860 containerd[1435]: time="2025-02-13T19:55:40.316845796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:55:40.316880 containerd[1435]: time="2025-02-13T19:55:40.316860589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317034 containerd[1435]: time="2025-02-13T19:55:40.317008520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317034 containerd[1435]: time="2025-02-13T19:55:40.317030044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317203 containerd[1435]: time="2025-02-13T19:55:40.317119640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317203 containerd[1435]: time="2025-02-13T19:55:40.317132760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317266 containerd[1435]: time="2025-02-13T19:55:40.317233079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317468 containerd[1435]: time="2025-02-13T19:55:40.317442389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317563 containerd[1435]: time="2025-02-13T19:55:40.317545902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:55:40.317592 containerd[1435]: time="2025-02-13T19:55:40.317562407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:55:40.317652 containerd[1435]: time="2025-02-13T19:55:40.317636867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:55:40.317725 containerd[1435]: time="2025-02-13T19:55:40.317680638Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:55:40.321130 containerd[1435]: time="2025-02-13T19:55:40.321103555Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:55:40.321208 containerd[1435]: time="2025-02-13T19:55:40.321158735Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:55:40.321257 containerd[1435]: time="2025-02-13T19:55:40.321230152Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:55:40.321257 containerd[1435]: time="2025-02-13T19:55:40.321253084Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:55:40.321318 containerd[1435]: time="2025-02-13T19:55:40.321266622Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:55:40.321409 containerd[1435]: time="2025-02-13T19:55:40.321388846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:55:40.321656 containerd[1435]: time="2025-02-13T19:55:40.321636830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:55:40.321761 containerd[1435]: time="2025-02-13T19:55:40.321743767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:55:40.321795 containerd[1435]: time="2025-02-13T19:55:40.321764454Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:55:40.321795 containerd[1435]: time="2025-02-13T19:55:40.321777955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:55:40.321795 containerd[1435]: time="2025-02-13T19:55:40.321791835Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321851 containerd[1435]: time="2025-02-13T19:55:40.321804575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321851 containerd[1435]: time="2025-02-13T19:55:40.321837089Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321883 containerd[1435]: time="2025-02-13T19:55:40.321850513Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321883 containerd[1435]: time="2025-02-13T19:55:40.321866295Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321883 containerd[1435]: time="2025-02-13T19:55:40.321878692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321937 containerd[1435]: time="2025-02-13T19:55:40.321890595Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:55:40.321937 containerd[1435]: time="2025-02-13T19:55:40.321901738Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:55:40.321937 containerd[1435]: time="2025-02-13T19:55:40.321919725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.321937 containerd[1435]: time="2025-02-13T19:55:40.321935887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322006 containerd[1435]: time="2025-02-13T19:55:40.321948247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322006 containerd[1435]: time="2025-02-13T19:55:40.321960188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322006 containerd[1435]: time="2025-02-13T19:55:40.321971064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322006 containerd[1435]: time="2025-02-13T19:55:40.321984184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322006 containerd[1435]: time="2025-02-13T19:55:40.321995440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322007495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322019779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322033697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322044915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322055678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322066858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322087 containerd[1435]: time="2025-02-13T19:55:40.322084541Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:55:40.322230 containerd[1435]: time="2025-02-13T19:55:40.322103518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322230 containerd[1435]: time="2025-02-13T19:55:40.322115002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.322230 containerd[1435]: time="2025-02-13T19:55:40.322135994Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:55:40.322807 containerd[1435]: time="2025-02-13T19:55:40.322774761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:55:40.322978 containerd[1435]: time="2025-02-13T19:55:40.322959313Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:55:40.324075 containerd[1435]: time="2025-02-13T19:55:40.324034534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:55:40.324129 containerd[1435]: time="2025-02-13T19:55:40.324081576Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:55:40.324129 containerd[1435]: time="2025-02-13T19:55:40.324095000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.324129 containerd[1435]: time="2025-02-13T19:55:40.324113064Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:55:40.324129 containerd[1435]: time="2025-02-13T19:55:40.324125993Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:55:40.324216 containerd[1435]: time="2025-02-13T19:55:40.324136337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:55:40.324558 containerd[1435]: time="2025-02-13T19:55:40.324501260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:55:40.324679 containerd[1435]: time="2025-02-13T19:55:40.324567353Z" level=info msg="Connect containerd service" Feb 13 19:55:40.324679 containerd[1435]: time="2025-02-13T19:55:40.324602035Z" level=info msg="using legacy CRI server" Feb 13 19:55:40.324679 containerd[1435]: time="2025-02-13T19:55:40.324613862Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:55:40.324731 containerd[1435]: time="2025-02-13T19:55:40.324702469Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:55:40.325792 containerd[1435]: time="2025-02-13T19:55:40.325751640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:55:40.325984 containerd[1435]: time="2025-02-13T19:55:40.325936878Z" level=info msg="Start subscribing containerd event" Feb 13 19:55:40.325984 containerd[1435]: time="2025-02-13T19:55:40.325981181Z" level=info msg="Start recovering state" Feb 13 19:55:40.326203 containerd[1435]: time="2025-02-13T19:55:40.326040848Z" level=info msg="Start event monitor" Feb 13 19:55:40.326203 containerd[1435]: time="2025-02-13T19:55:40.326063931Z" level=info msg="Start snapshots syncer" Feb 13 19:55:40.326203 containerd[1435]: time="2025-02-13T19:55:40.326073743Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:55:40.326203 containerd[1435]: time="2025-02-13T19:55:40.326080702Z" level=info msg="Start streaming server" Feb 13 19:55:40.326630 containerd[1435]: time="2025-02-13T19:55:40.326606067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:55:40.326674 containerd[1435]: time="2025-02-13T19:55:40.326652766Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:55:40.326713 containerd[1435]: time="2025-02-13T19:55:40.326696499Z" level=info msg="containerd successfully booted in 0.041630s" Feb 13 19:55:40.326826 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:55:40.461395 tar[1425]: linux-arm64/README.md Feb 13 19:55:40.473620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:55:40.788602 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:55:40.807408 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:55:40.821418 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:55:40.827531 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:55:40.827722 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:55:40.831831 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:55:40.844899 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:55:40.847269 systemd[1]: Started getty@tty1.service - Getty on tty1. 
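The containerd startup above ends with the daemon serving on /run/containerd/containerd.sock and its ttrpc twin. A minimal probe of that endpoint, sketched with the containerd Go client (github.com/containerd/containerd); the socket path and the "k8s.io" namespace mirror the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same socket the daemon reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images and containers managed through the CRI plugin live in "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", v.Version, v.Revision)
}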
Feb 13 19:55:40.849087 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:55:40.850125 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:55:41.446323 systemd-networkd[1374]: eth0: Gained IPv6LL Feb 13 19:55:41.449211 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:55:41.452359 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:55:41.461444 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:55:41.463917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:55:41.465958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:55:41.480445 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:55:41.480638 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:55:41.482315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:55:41.485346 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:55:41.977434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:55:41.978642 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:55:41.981448 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:55:41.981549 systemd[1]: Startup finished in 531ms (kernel) + 5.560s (initrd) + 3.754s (userspace) = 9.846s. Feb 13 19:55:42.381956 kubelet[1522]: E0213 19:55:42.381850 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:55:42.383990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:55:42.384151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:55:44.864923 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:55:44.866027 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:48392.service - OpenSSH per-connection server daemon (10.0.0.1:48392). Feb 13 19:55:44.916339 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 48392 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:44.919688 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:44.938392 systemd-logind[1418]: New session 1 of user core. Feb 13 19:55:44.939371 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:55:44.948418 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:55:44.956858 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:55:44.958901 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:55:44.964891 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:55:45.046441 systemd[1539]: Queued start job for default target default.target. Feb 13 19:55:45.056152 systemd[1539]: Created slice app.slice - User Application Slice. Feb 13 19:55:45.056215 systemd[1539]: Reached target paths.target - Paths. 
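The kubelet exit above is the expected pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until a bootstrap tool such as kubeadm writes it. A sketch of emitting such a file with the typed API (k8s.io/kubelet/config/v1beta1 plus sigs.k8s.io/yaml); both field values are illustrative choices rather than anything recovered from this host, though CgroupDriver "systemd" is consistent with the SystemdCgroup:true runc option in the CRI config dump above:

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver:  "systemd",                   // illustrative choice
		StaticPodPath: "/etc/kubernetes/manifests", // matches the static pod path logged later
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Writing this output to /var/lib/kubelet/config.yaml is what ends the
	// restart loop seen throughout this log.
	fmt.Print(string(out))
}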
Feb 13 19:55:45.056229 systemd[1539]: Reached target timers.target - Timers. Feb 13 19:55:45.057324 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:55:45.065836 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:55:45.065894 systemd[1539]: Reached target sockets.target - Sockets. Feb 13 19:55:45.065905 systemd[1539]: Reached target basic.target - Basic System. Feb 13 19:55:45.065941 systemd[1539]: Reached target default.target - Main User Target. Feb 13 19:55:45.065966 systemd[1539]: Startup finished in 96ms. Feb 13 19:55:45.066143 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:55:45.067600 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:55:45.135152 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:48408.service - OpenSSH per-connection server daemon (10.0.0.1:48408). Feb 13 19:55:45.168501 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 48408 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.169644 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.173454 systemd-logind[1418]: New session 2 of user core. Feb 13 19:55:45.184312 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:55:45.233210 sshd[1550]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.241291 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:48408.service: Deactivated successfully. Feb 13 19:55:45.242542 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:55:45.244391 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:55:45.245517 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:48420.service - OpenSSH per-connection server daemon (10.0.0.1:48420). Feb 13 19:55:45.246060 systemd-logind[1418]: Removed session 2. Feb 13 19:55:45.279144 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 48420 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.280379 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.283710 systemd-logind[1418]: New session 3 of user core. Feb 13 19:55:45.293317 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:55:45.339657 sshd[1557]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.352195 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:48420.service: Deactivated successfully. Feb 13 19:55:45.353374 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:55:45.355420 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:55:45.356733 systemd-logind[1418]: Removed session 3. Feb 13 19:55:45.358441 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:48436.service - OpenSSH per-connection server daemon (10.0.0.1:48436). Feb 13 19:55:45.391466 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 48436 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.392824 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.397237 systemd-logind[1418]: New session 4 of user core. Feb 13 19:55:45.410345 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:55:45.460084 sshd[1564]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.474343 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:48436.service: Deactivated successfully. 
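The sessions above all authenticate the core user by public key before PAM opens the session. The same handshake from the client side, sketched with golang.org/x/crypto/ssh; the key path is hypothetical, and InsecureIgnoreHostKey is tolerable only for a throwaway lab probe like this one:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // lab-only shortcut
	}
	// 10.0.0.127:22 is the listener shown in the sshd service names above.
	client, err := ssh.Dial("tcp", "10.0.0.127:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uname -r")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(out)
}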
Feb 13 19:55:45.475519 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:55:45.476610 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:55:45.479433 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:48442.service - OpenSSH per-connection server daemon (10.0.0.1:48442). Feb 13 19:55:45.480080 systemd-logind[1418]: Removed session 4. Feb 13 19:55:45.511150 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 48442 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.512245 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.515255 systemd-logind[1418]: New session 5 of user core. Feb 13 19:55:45.523308 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:55:45.576925 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:55:45.577222 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:55:45.591833 sudo[1574]: pam_unix(sudo:session): session closed for user root Feb 13 19:55:45.595224 sshd[1571]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.604353 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:48442.service: Deactivated successfully. Feb 13 19:55:45.605676 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:55:45.606873 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:55:45.615423 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:48456.service - OpenSSH per-connection server daemon (10.0.0.1:48456). Feb 13 19:55:45.616153 systemd-logind[1418]: Removed session 5. Feb 13 19:55:45.646136 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 48456 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.647431 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.651173 systemd-logind[1418]: New session 6 of user core. Feb 13 19:55:45.666315 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:55:45.715125 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:55:45.715412 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:55:45.718242 sudo[1583]: pam_unix(sudo:session): session closed for user root Feb 13 19:55:45.722606 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:55:45.722884 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:55:45.740525 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:55:45.741633 auditctl[1586]: No rules Feb 13 19:55:45.742441 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:55:45.743261 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:55:45.744935 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:55:45.767208 augenrules[1604]: No rules Feb 13 19:55:45.770229 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:55:45.771236 sudo[1582]: pam_unix(sudo:session): session closed for user root Feb 13 19:55:45.772691 sshd[1579]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.783406 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:48456.service: Deactivated successfully. 
Feb 13 19:55:45.784890 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:55:45.786536 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:55:45.803520 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:48468.service - OpenSSH per-connection server daemon (10.0.0.1:48468). Feb 13 19:55:45.804323 systemd-logind[1418]: Removed session 6. Feb 13 19:55:45.833608 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 48468 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:55:45.834789 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.838190 systemd-logind[1418]: New session 7 of user core. Feb 13 19:55:45.846391 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:55:45.895704 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:55:45.896323 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:55:46.243403 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:55:46.243499 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:55:46.514430 dockerd[1634]: time="2025-02-13T19:55:46.514308759Z" level=info msg="Starting up" Feb 13 19:55:46.725545 dockerd[1634]: time="2025-02-13T19:55:46.725497628Z" level=info msg="Loading containers: start." Feb 13 19:55:46.821215 kernel: Initializing XFRM netlink socket Feb 13 19:55:46.885218 systemd-networkd[1374]: docker0: Link UP Feb 13 19:55:46.906466 dockerd[1634]: time="2025-02-13T19:55:46.906428700Z" level=info msg="Loading containers: done." Feb 13 19:55:46.922383 dockerd[1634]: time="2025-02-13T19:55:46.922325764Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:55:46.922582 dockerd[1634]: time="2025-02-13T19:55:46.922441193Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:55:46.922582 dockerd[1634]: time="2025-02-13T19:55:46.922560964Z" level=info msg="Daemon has completed initialization" Feb 13 19:55:46.949897 dockerd[1634]: time="2025-02-13T19:55:46.949601102Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:55:46.949846 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:55:47.444921 containerd[1435]: time="2025-02-13T19:55:47.444882301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:55:48.072180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096987094.mount: Deactivated successfully. 
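dockerd finishes initialization above and serves the API on /run/docker.sock. A minimal liveness probe against that daemon, assuming the moby client module (github.com/docker/docker/client) and the default DOCKER_HOST; nothing here is specific to this machine:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is
	// unset; version negotiation copes with the 26.1.0 daemon in the log.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("negotiated API version:", ping.APIVersion)
}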
Feb 13 19:55:49.128682 containerd[1435]: time="2025-02-13T19:55:49.128584663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:49.129617 containerd[1435]: time="2025-02-13T19:55:49.129551008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 19:55:49.130161 containerd[1435]: time="2025-02-13T19:55:49.130124975Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:49.134803 containerd[1435]: time="2025-02-13T19:55:49.134754937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:49.135794 containerd[1435]: time="2025-02-13T19:55:49.135645936Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 1.690714748s" Feb 13 19:55:49.135919 containerd[1435]: time="2025-02-13T19:55:49.135899128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:55:49.137164 containerd[1435]: time="2025-02-13T19:55:49.137119020Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:55:50.824532 containerd[1435]: time="2025-02-13T19:55:50.824100777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:50.825286 containerd[1435]: time="2025-02-13T19:55:50.825202031Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 19:55:50.825882 containerd[1435]: time="2025-02-13T19:55:50.825848301Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:50.828684 containerd[1435]: time="2025-02-13T19:55:50.828624592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:50.830037 containerd[1435]: time="2025-02-13T19:55:50.830000992Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.692842581s" Feb 13 19:55:50.830081 containerd[1435]: time="2025-02-13T19:55:50.830041500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 19:55:50.830594 
containerd[1435]: time="2025-02-13T19:55:50.830519738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:55:52.456020 containerd[1435]: time="2025-02-13T19:55:52.455970682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:52.456896 containerd[1435]: time="2025-02-13T19:55:52.456511262Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 19:55:52.457550 containerd[1435]: time="2025-02-13T19:55:52.457517374Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:52.460617 containerd[1435]: time="2025-02-13T19:55:52.460583829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:52.462428 containerd[1435]: time="2025-02-13T19:55:52.462309923Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.631748788s" Feb 13 19:55:52.462428 containerd[1435]: time="2025-02-13T19:55:52.462364931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:55:52.463032 containerd[1435]: time="2025-02-13T19:55:52.462907729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:55:52.634449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:55:52.648420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:55:52.750336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:55:52.754367 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:55:52.792862 kubelet[1851]: E0213 19:55:52.792804 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:55:52.795472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:55:52.795596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:55:53.512850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032049556.mount: Deactivated successfully. 
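The pulls above run through the CRI plugin, but the same operation can be reproduced directly against containerd; a sketch with the containerd Go client, reusing an image ref from the log, with WithPullUnpack standing in for the unpack the CRI path performs:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pull into the CRI namespace so the image is visible to kubelet.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.32.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Name and digest correspond to the "repo tag" / "repo digest" fields
	// in the Pulled lines above.
	fmt.Println(img.Name(), img.Target().Digest)
}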
Feb 13 19:55:54.153281 containerd[1435]: time="2025-02-13T19:55:54.153215906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:54.154671 containerd[1435]: time="2025-02-13T19:55:54.154597565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 19:55:54.156487 containerd[1435]: time="2025-02-13T19:55:54.155576349Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:54.157805 containerd[1435]: time="2025-02-13T19:55:54.157771061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:54.158999 containerd[1435]: time="2025-02-13T19:55:54.158972063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.696032989s" Feb 13 19:55:54.159103 containerd[1435]: time="2025-02-13T19:55:54.159087301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:55:54.159583 containerd[1435]: time="2025-02-13T19:55:54.159562581Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:55:54.886531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864399515.mount: Deactivated successfully. 
Feb 13 19:55:55.865539 containerd[1435]: time="2025-02-13T19:55:55.865478043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:55.869777 containerd[1435]: time="2025-02-13T19:55:55.869746269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 19:55:55.870786 containerd[1435]: time="2025-02-13T19:55:55.870756069Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:55.873717 containerd[1435]: time="2025-02-13T19:55:55.873683391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:55.875095 containerd[1435]: time="2025-02-13T19:55:55.874983845Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.715292837s" Feb 13 19:55:55.875095 containerd[1435]: time="2025-02-13T19:55:55.875021950Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:55:55.875612 containerd[1435]: time="2025-02-13T19:55:55.875594795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:55:56.353083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164109707.mount: Deactivated successfully. 
Feb 13 19:55:56.357510 containerd[1435]: time="2025-02-13T19:55:56.357465752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:56.358433 containerd[1435]: time="2025-02-13T19:55:56.358404657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:55:56.359270 containerd[1435]: time="2025-02-13T19:55:56.359241916Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:56.361402 containerd[1435]: time="2025-02-13T19:55:56.361353676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:55:56.362254 containerd[1435]: time="2025-02-13T19:55:56.362221079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 486.601007ms" Feb 13 19:55:56.362301 containerd[1435]: time="2025-02-13T19:55:56.362253569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:55:56.362766 containerd[1435]: time="2025-02-13T19:55:56.362737457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:55:57.018416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713549050.mount: Deactivated successfully. Feb 13 19:56:00.225948 containerd[1435]: time="2025-02-13T19:56:00.225622831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:00.226796 containerd[1435]: time="2025-02-13T19:56:00.226460085Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 19:56:00.227446 containerd[1435]: time="2025-02-13T19:56:00.227410951Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:00.230508 containerd[1435]: time="2025-02-13T19:56:00.230478213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:00.231820 containerd[1435]: time="2025-02-13T19:56:00.231782229Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.869009475s" Feb 13 19:56:00.231859 containerd[1435]: time="2025-02-13T19:56:00.231818146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:56:02.878710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
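Each kubelet exit above ends with systemd scheduling another restart (the counter is at 2 here). The same restart can be requested programmatically over the system bus; a sketch with github.com/coreos/go-systemd/v22/dbus, assuming root privileges and a running system bus:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx) // connects to the system bus
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The channel receives the job result ("done", "failed", ...) once the
	// restart job completes; "replace" is the usual job mode.
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	fmt.Println("restart job:", <-done)
}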
Feb 13 19:56:02.889610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:56:03.004212 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:56:03.006921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:03.041877 kubelet[2007]: E0213 19:56:03.041763 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:56:03.044483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:56:03.044621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:56:04.113440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:04.128407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:56:04.151442 systemd[1]: Reloading requested from client PID 2022 ('systemctl') (unit session-7.scope)... Feb 13 19:56:04.151589 systemd[1]: Reloading... Feb 13 19:56:04.207214 zram_generator::config[2061]: No configuration found. Feb 13 19:56:04.295933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:56:04.353787 systemd[1]: Reloading finished in 201 ms. Feb 13 19:56:04.405797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:56:04.405864 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:56:04.406085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:04.409422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:56:04.520491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:04.524270 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:56:04.558114 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:56:04.558114 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:56:04.558114 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:56:04.558423 kubelet[2105]: I0213 19:56:04.558169 2105 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:56:04.969224 kubelet[2105]: I0213 19:56:04.968688 2105 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:56:04.969224 kubelet[2105]: I0213 19:56:04.968720 2105 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:56:04.969224 kubelet[2105]: I0213 19:56:04.968944 2105 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:56:05.014056 kubelet[2105]: E0213 19:56:05.013996 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:56:05.015525 kubelet[2105]: I0213 19:56:05.015496 2105 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:56:05.021962 kubelet[2105]: E0213 19:56:05.021814 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:56:05.021962 kubelet[2105]: I0213 19:56:05.021875 2105 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:56:05.024521 kubelet[2105]: I0213 19:56:05.024481 2105 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:56:05.025634 kubelet[2105]: I0213 19:56:05.025593 2105 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:56:05.025788 kubelet[2105]: I0213 19:56:05.025629 2105 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:56:05.025863 kubelet[2105]: I0213 19:56:05.025851 2105 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:56:05.025863 kubelet[2105]: I0213 19:56:05.025861 2105 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:56:05.026056 kubelet[2105]: I0213 19:56:05.026033 2105 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:56:05.030215 kubelet[2105]: I0213 19:56:05.030180 2105 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:56:05.030215 kubelet[2105]: I0213 19:56:05.030215 2105 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:56:05.030274 kubelet[2105]: I0213 19:56:05.030233 2105 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:56:05.030274 kubelet[2105]: I0213 19:56:05.030243 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:56:05.035634 kubelet[2105]: I0213 19:56:05.032992 2105 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:56:05.035634 kubelet[2105]: I0213 19:56:05.033892 2105 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:56:05.035634 kubelet[2105]: W0213 19:56:05.034138 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
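The "Container runtime initialized" line reports containerd v1.7.21, the result of kubelet's CRI handshake against the endpoint from the deprecated --container-runtime-endpoint flag. The equivalent Version call, sketched over gRPC with k8s.io/cri-api; the socket path mirrors the log:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. "containerd v1.7.21"
}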
Feb 13 19:56:05.035634 kubelet[2105]: I0213 19:56:05.035118 2105 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:56:05.035634 kubelet[2105]: I0213 19:56:05.035145 2105 server.go:1287] "Started kubelet" Feb 13 19:56:05.035634 kubelet[2105]: W0213 19:56:05.035422 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Feb 13 19:56:05.035634 kubelet[2105]: E0213 19:56:05.035465 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:56:05.035634 kubelet[2105]: W0213 19:56:05.035616 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Feb 13 19:56:05.035854 kubelet[2105]: E0213 19:56:05.035657 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:56:05.035854 kubelet[2105]: I0213 19:56:05.035662 2105 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:56:05.037113 kubelet[2105]: I0213 19:56:05.036483 2105 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:56:05.037113 kubelet[2105]: I0213 19:56:05.036894 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:56:05.037214 kubelet[2105]: I0213 19:56:05.037147 2105 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:56:05.038938 kubelet[2105]: I0213 19:56:05.038921 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:56:05.040944 kubelet[2105]: I0213 19:56:05.040916 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:56:05.041998 kubelet[2105]: E0213 19:56:05.041966 2105 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:56:05.041998 kubelet[2105]: I0213 19:56:05.041999 2105 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:56:05.042205 kubelet[2105]: I0213 19:56:05.042172 2105 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:56:05.042259 kubelet[2105]: E0213 19:56:05.042208 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="200ms" Feb 13 19:56:05.042305 kubelet[2105]: I0213 19:56:05.042290 2105 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:56:05.042607 kubelet[2105]: W0213 19:56:05.042577 2105 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Feb 13 19:56:05.042641 kubelet[2105]: E0213 19:56:05.042613 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:56:05.043026 kubelet[2105]: I0213 19:56:05.042981 2105 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:56:05.043164 kubelet[2105]: I0213 19:56:05.043075 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:56:05.043581 kubelet[2105]: E0213 19:56:05.043552 2105 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:56:05.043937 kubelet[2105]: I0213 19:56:05.043890 2105 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:56:05.044028 kubelet[2105]: E0213 19:56:05.043547 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.127:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.127:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dcb8420918e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:56:05.035129061 +0000 UTC m=+0.507713745,LastTimestamp:2025-02-13 19:56:05.035129061 +0000 UTC m=+0.507713745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:56:05.056614 kubelet[2105]: I0213 19:56:05.056577 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:56:05.057594 kubelet[2105]: I0213 19:56:05.057568 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:56:05.057594 kubelet[2105]: I0213 19:56:05.057591 2105 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:56:05.057809 kubelet[2105]: I0213 19:56:05.057601 2105 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:56:05.057809 kubelet[2105]: I0213 19:56:05.057612 2105 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
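Every reflector warning above is a plain List against https://10.0.0.127:6443, refused because kube-apiserver itself has not started yet; the static pods created below are how it eventually comes up. The failing Services list, reduced to a minimal client-go sketch; the kubeconfig path is an assumption (the log only shows that client certificate rotation is on):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Field selector and limit match the query string in the failing URL.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "spec.clusterIP!=None", Limit: 500})
	if err != nil {
		log.Fatal(err) // "connection refused" until the apiserver answers
	}
	fmt.Println("services:", len(svcs.Items))
}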
Feb 13 19:56:05.057809 kubelet[2105]: I0213 19:56:05.057620 2105 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:56:05.057809 kubelet[2105]: I0213 19:56:05.057620 2105 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:56:05.057809 kubelet[2105]: I0213 19:56:05.057637 2105 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:56:05.057809 kubelet[2105]: E0213 19:56:05.057656 2105 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:56:05.058081 kubelet[2105]: W0213 19:56:05.058014 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Feb 13 19:56:05.058081 kubelet[2105]: E0213 19:56:05.058039 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:56:05.142389 kubelet[2105]: E0213 19:56:05.142344 2105 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:56:05.157818 kubelet[2105]: E0213 19:56:05.157772 2105 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:56:05.243142 kubelet[2105]: E0213 19:56:05.242942 2105 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:56:05.243413 kubelet[2105]: E0213 19:56:05.243380 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="400ms" Feb 13 19:56:05.293778 kubelet[2105]: I0213 19:56:05.293746 2105 policy_none.go:49] "None policy: Start" Feb 13 19:56:05.293778 kubelet[2105]: I0213 19:56:05.293777 2105 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:56:05.293917 kubelet[2105]: I0213 19:56:05.293790 2105 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:56:05.299359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:56:05.317203 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:56:05.320590 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
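The lease the controller keeps retrying is a coordination.k8s.io Lease named after the node in the kube-node-lease namespace; it doubles as the node heartbeat. Reading it back once the apiserver is reachable, continuing with the same hypothetical kubeconfig as the previous sketch:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // fails with "connection refused" at this point in the boot
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("held by:", *lease.Spec.HolderIdentity, "renewed:", lease.Spec.RenewTime)
	}
}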
Feb 13 19:56:05.333923 kubelet[2105]: I0213 19:56:05.333880 2105 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:56:05.334312 kubelet[2105]: I0213 19:56:05.334079 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:56:05.334312 kubelet[2105]: I0213 19:56:05.334098 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:56:05.334411 kubelet[2105]: I0213 19:56:05.334381 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:56:05.335788 kubelet[2105]: E0213 19:56:05.335762 2105 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:56:05.335851 kubelet[2105]: E0213 19:56:05.335801 2105 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:56:05.366047 systemd[1]: Created slice kubepods-burstable-pod737d57d0fcea8c5e8fdc725ccc0ae59e.slice - libcontainer container kubepods-burstable-pod737d57d0fcea8c5e8fdc725ccc0ae59e.slice. Feb 13 19:56:05.380235 kubelet[2105]: E0213 19:56:05.379967 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:56:05.381913 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:56:05.396201 kubelet[2105]: E0213 19:56:05.396150 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:56:05.398647 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
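The slice names above encode the QoS class and the pod UID (for these static pods, the config-hash UIDs visible in the mirror-pod errors). A toy reconstruction of the naming scheme, not kubelet's actual code path; the dash-to-underscore step follows systemd unit-name escaping and is an assumption, since these particular UIDs contain no dashes:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds a systemd slice name like the ones journald logs above.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Prints kubepods-burstable-pod737d57d0fcea8c5e8fdc725ccc0ae59e.slice,
	// matching the kube-apiserver slice created above.
	fmt.Println(podSlice("burstable", "737d57d0fcea8c5e8fdc725ccc0ae59e"))
}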
Feb 13 19:56:05.400215 kubelet[2105]: E0213 19:56:05.400149 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:05.436178 kubelet[2105]: I0213 19:56:05.436158 2105 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:56:05.436684 kubelet[2105]: E0213 19:56:05.436647 2105 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost"
Feb 13 19:56:05.444017 kubelet[2105]: I0213 19:56:05.443991 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:05.444070 kubelet[2105]: I0213 19:56:05.444019 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:56:05.444070 kubelet[2105]: I0213 19:56:05.444037 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:05.444070 kubelet[2105]: I0213 19:56:05.444053 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:05.444070 kubelet[2105]: I0213 19:56:05.444068 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:05.444168 kubelet[2105]: I0213 19:56:05.444082 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:05.444168 kubelet[2105]: I0213 19:56:05.444097 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:05.444168 kubelet[2105]: I0213 19:56:05.444117 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:05.444168 kubelet[2105]: I0213 19:56:05.444133 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:05.637785 kubelet[2105]: I0213 19:56:05.637718 2105 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:56:05.638272 kubelet[2105]: E0213 19:56:05.638234 2105 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost"
Feb 13 19:56:05.644705 kubelet[2105]: E0213 19:56:05.644678 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="800ms"
Feb 13 19:56:05.681109 kubelet[2105]: E0213 19:56:05.681012 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:05.681783 containerd[1435]: time="2025-02-13T19:56:05.681698586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:737d57d0fcea8c5e8fdc725ccc0ae59e,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:05.696924 kubelet[2105]: E0213 19:56:05.696899 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:05.697509 containerd[1435]: time="2025-02-13T19:56:05.697294973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:05.701165 kubelet[2105]: E0213 19:56:05.701139 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:05.701558 containerd[1435]: time="2025-02-13T19:56:05.701443103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:05.881462 kubelet[2105]: W0213 19:56:05.881403 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Feb 13 19:56:05.881462 kubelet[2105]: E0213 19:56:05.881449 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:56:05.961070 kubelet[2105]: W0213 19:56:05.960892 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Feb 13 19:56:05.961070 kubelet[2105]: E0213 19:56:05.960958 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:56:06.036512 kubelet[2105]: W0213 19:56:06.036445 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Feb 13 19:56:06.036512 kubelet[2105]: E0213 19:56:06.036517 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:56:06.039720 kubelet[2105]: I0213 19:56:06.039686 2105 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:56:06.040020 kubelet[2105]: E0213 19:56:06.039971 2105 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost"
Feb 13 19:56:06.229803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685986608.mount: Deactivated successfully.
Feb 13 19:56:06.240259 containerd[1435]: time="2025-02-13T19:56:06.239740233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:56:06.241285 containerd[1435]: time="2025-02-13T19:56:06.241229190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 19:56:06.241899 containerd[1435]: time="2025-02-13T19:56:06.241816929Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:56:06.243526 containerd[1435]: time="2025-02-13T19:56:06.242757287Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:56:06.243526 containerd[1435]: time="2025-02-13T19:56:06.242987614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:56:06.244005 containerd[1435]: time="2025-02-13T19:56:06.243924857Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:56:06.245228 containerd[1435]: time="2025-02-13T19:56:06.245145466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:56:06.246766 containerd[1435]: time="2025-02-13T19:56:06.246725923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:56:06.249320 containerd[1435]: time="2025-02-13T19:56:06.249265110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.489975ms"
Feb 13 19:56:06.254673 containerd[1435]: time="2025-02-13T19:56:06.254514262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.010303ms"
Feb 13 19:56:06.255177 containerd[1435]: time="2025-02-13T19:56:06.255148050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.784795ms"
Feb 13 19:56:06.394398 containerd[1435]: time="2025-02-13T19:56:06.393958791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:06.394398 containerd[1435]: time="2025-02-13T19:56:06.394024371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:06.394398 containerd[1435]: time="2025-02-13T19:56:06.394035314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.395094291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.395137345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.395162546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.395293705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.394663751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.394834848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.394853060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.395494 containerd[1435]: time="2025-02-13T19:56:06.395170414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.395957 containerd[1435]: time="2025-02-13T19:56:06.395906046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:06.422447 systemd[1]: Started cri-containerd-08312fa8149b8eeb9a87d24ea1bb8dfb39dbb9128ef2b706648b56dc73fd9f38.scope - libcontainer container 08312fa8149b8eeb9a87d24ea1bb8dfb39dbb9128ef2b706648b56dc73fd9f38.
Feb 13 19:56:06.423660 systemd[1]: Started cri-containerd-6efd4d217f2aa2a4f4c8542671a52c8f3d77b0c7c48f98044a5e3bcc8a270a52.scope - libcontainer container 6efd4d217f2aa2a4f4c8542671a52c8f3d77b0c7c48f98044a5e3bcc8a270a52.
Feb 13 19:56:06.424947 systemd[1]: Started cri-containerd-7c1ad5ee8cab8232d83ca8596448af5f06dfafcd9ff3cf3930c00a08aad3f5ad.scope - libcontainer container 7c1ad5ee8cab8232d83ca8596448af5f06dfafcd9ff3cf3930c00a08aad3f5ad.
Feb 13 19:56:06.445773 kubelet[2105]: E0213 19:56:06.445738 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="1.6s"
Feb 13 19:56:06.449040 kubelet[2105]: W0213 19:56:06.448945 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Feb 13 19:56:06.449040 kubelet[2105]: E0213 19:56:06.449009 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:56:06.457745 containerd[1435]: time="2025-02-13T19:56:06.456043486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"6efd4d217f2aa2a4f4c8542671a52c8f3d77b0c7c48f98044a5e3bcc8a270a52\""
Feb 13 19:56:06.458661 kubelet[2105]: E0213 19:56:06.458635 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:06.460235 containerd[1435]: time="2025-02-13T19:56:06.460196319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:737d57d0fcea8c5e8fdc725ccc0ae59e,Namespace:kube-system,Attempt:0,} returns sandbox id \"08312fa8149b8eeb9a87d24ea1bb8dfb39dbb9128ef2b706648b56dc73fd9f38\""
Feb 13 19:56:06.460919 kubelet[2105]: E0213 19:56:06.460896 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:06.461569 containerd[1435]: time="2025-02-13T19:56:06.461532670Z" level=info msg="CreateContainer within sandbox \"6efd4d217f2aa2a4f4c8542671a52c8f3d77b0c7c48f98044a5e3bcc8a270a52\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:56:06.463601 containerd[1435]: time="2025-02-13T19:56:06.463295128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1ad5ee8cab8232d83ca8596448af5f06dfafcd9ff3cf3930c00a08aad3f5ad\""
Feb 13 19:56:06.463952 containerd[1435]: time="2025-02-13T19:56:06.463911463Z" level=info msg="CreateContainer within sandbox \"08312fa8149b8eeb9a87d24ea1bb8dfb39dbb9128ef2b706648b56dc73fd9f38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:56:06.464285 kubelet[2105]: E0213 19:56:06.464263 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:06.466056 containerd[1435]: time="2025-02-13T19:56:06.466018393Z" level=info msg="CreateContainer within sandbox \"7c1ad5ee8cab8232d83ca8596448af5f06dfafcd9ff3cf3930c00a08aad3f5ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:56:06.476023 containerd[1435]: time="2025-02-13T19:56:06.475978682Z" level=info msg="CreateContainer within sandbox \"6efd4d217f2aa2a4f4c8542671a52c8f3d77b0c7c48f98044a5e3bcc8a270a52\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"246ebe7ea632fbdbc371bd959af840db06833aac08122fa93b98efe545fcaa42\""
Feb 13 19:56:06.476534 containerd[1435]: time="2025-02-13T19:56:06.476503797Z" level=info msg="StartContainer for \"246ebe7ea632fbdbc371bd959af840db06833aac08122fa93b98efe545fcaa42\""
Feb 13 19:56:06.486279 containerd[1435]: time="2025-02-13T19:56:06.485570936Z" level=info msg="CreateContainer within sandbox \"08312fa8149b8eeb9a87d24ea1bb8dfb39dbb9128ef2b706648b56dc73fd9f38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8393dccd3bdd66d9f24266ee308ada8ec0941d9c94c065a883fd5ce1cb9b68d\""
Feb 13 19:56:06.487395 containerd[1435]: time="2025-02-13T19:56:06.487360352Z" level=info msg="StartContainer for \"a8393dccd3bdd66d9f24266ee308ada8ec0941d9c94c065a883fd5ce1cb9b68d\""
Feb 13 19:56:06.490584 containerd[1435]: time="2025-02-13T19:56:06.490478811Z" level=info msg="CreateContainer within sandbox \"7c1ad5ee8cab8232d83ca8596448af5f06dfafcd9ff3cf3930c00a08aad3f5ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9236ad6a0ee0ba464cfa4831fdc30b6b44c9d295408ff25a37194327574af31f\""
Feb 13 19:56:06.491811 containerd[1435]: time="2025-02-13T19:56:06.491722824Z" level=info msg="StartContainer for \"9236ad6a0ee0ba464cfa4831fdc30b6b44c9d295408ff25a37194327574af31f\""
Feb 13 19:56:06.504381 systemd[1]: Started cri-containerd-246ebe7ea632fbdbc371bd959af840db06833aac08122fa93b98efe545fcaa42.scope - libcontainer container 246ebe7ea632fbdbc371bd959af840db06833aac08122fa93b98efe545fcaa42.
Feb 13 19:56:06.518440 systemd[1]: Started cri-containerd-a8393dccd3bdd66d9f24266ee308ada8ec0941d9c94c065a883fd5ce1cb9b68d.scope - libcontainer container a8393dccd3bdd66d9f24266ee308ada8ec0941d9c94c065a883fd5ce1cb9b68d.
Feb 13 19:56:06.521468 systemd[1]: Started cri-containerd-9236ad6a0ee0ba464cfa4831fdc30b6b44c9d295408ff25a37194327574af31f.scope - libcontainer container 9236ad6a0ee0ba464cfa4831fdc30b6b44c9d295408ff25a37194327574af31f.
Feb 13 19:56:06.556239 containerd[1435]: time="2025-02-13T19:56:06.554791490Z" level=info msg="StartContainer for \"246ebe7ea632fbdbc371bd959af840db06833aac08122fa93b98efe545fcaa42\" returns successfully"
Feb 13 19:56:06.594737 containerd[1435]: time="2025-02-13T19:56:06.594669111Z" level=info msg="StartContainer for \"a8393dccd3bdd66d9f24266ee308ada8ec0941d9c94c065a883fd5ce1cb9b68d\" returns successfully"
Feb 13 19:56:06.594918 containerd[1435]: time="2025-02-13T19:56:06.594689480Z" level=info msg="StartContainer for \"9236ad6a0ee0ba464cfa4831fdc30b6b44c9d295408ff25a37194327574af31f\" returns successfully"
Feb 13 19:56:06.844200 kubelet[2105]: I0213 19:56:06.844077 2105 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:56:07.065305 kubelet[2105]: E0213 19:56:07.065140 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:07.065305 kubelet[2105]: E0213 19:56:07.065289 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:07.067039 kubelet[2105]: E0213 19:56:07.066829 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:07.067039 kubelet[2105]: E0213 19:56:07.066921 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:07.069339 kubelet[2105]: E0213 19:56:07.069098 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:07.069339 kubelet[2105]: E0213 19:56:07.069226 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:08.031316 kubelet[2105]: I0213 19:56:08.031275 2105 apiserver.go:52] "Watching apiserver"
Feb 13 19:56:08.043291 kubelet[2105]: I0213 19:56:08.043230 2105 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:56:08.056997 kubelet[2105]: E0213 19:56:08.056969 2105 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 19:56:08.074449 kubelet[2105]: E0213 19:56:08.074241 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:08.074449 kubelet[2105]: E0213 19:56:08.074329 2105 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:56:08.074449 kubelet[2105]: E0213 19:56:08.074358 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:08.075254 kubelet[2105]: E0213 19:56:08.075230 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:08.075852 kubelet[2105]: I0213 19:56:08.075814 2105 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 19:56:08.142708 kubelet[2105]: I0213 19:56:08.142434 2105 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:08.147842 kubelet[2105]: E0213 19:56:08.147805 2105 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:08.147842 kubelet[2105]: I0213 19:56:08.147837 2105 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:56:08.149781 kubelet[2105]: E0213 19:56:08.149749 2105 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:56:08.149781 kubelet[2105]: I0213 19:56:08.149780 2105 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:08.151245 kubelet[2105]: E0213 19:56:08.151216 2105 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:08.411755 kubelet[2105]: I0213 19:56:08.411548 2105 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:08.416195 kubelet[2105]: E0213 19:56:08.414808 2105 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:08.416195 kubelet[2105]: E0213 19:56:08.414943 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:09.983225 systemd[1]: Reloading requested from client PID 2386 ('systemctl') (unit session-7.scope)...
Feb 13 19:56:09.983242 systemd[1]: Reloading...
Feb 13 19:56:10.050228 zram_generator::config[2428]: No configuration found.
Feb 13 19:56:10.148601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:56:10.219357 systemd[1]: Reloading finished in 235 ms.
Feb 13 19:56:10.251486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:56:10.261551 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:56:10.261788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:56:10.271556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:56:10.363814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:56:10.367879 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:56:10.406877 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:56:10.406877 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:56:10.406877 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:56:10.407272 kubelet[2467]: I0213 19:56:10.406961 2467 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:56:10.412576 kubelet[2467]: I0213 19:56:10.412537 2467 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 19:56:10.412576 kubelet[2467]: I0213 19:56:10.412566 2467 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:56:10.412812 kubelet[2467]: I0213 19:56:10.412787 2467 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 19:56:10.413928 kubelet[2467]: I0213 19:56:10.413902 2467 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:56:10.415959 kubelet[2467]: I0213 19:56:10.415916 2467 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:56:10.419862 kubelet[2467]: E0213 19:56:10.419454 2467 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:56:10.419862 kubelet[2467]: I0213 19:56:10.419484 2467 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:56:10.422074 kubelet[2467]: I0213 19:56:10.422028 2467 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:56:10.422286 kubelet[2467]: I0213 19:56:10.422250 2467 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:56:10.422431 kubelet[2467]: I0213 19:56:10.422279 2467 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:56:10.422511 kubelet[2467]: I0213 19:56:10.422435 2467 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:56:10.422511 kubelet[2467]: I0213 19:56:10.422444 2467 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 19:56:10.422511 kubelet[2467]: I0213 19:56:10.422482 2467 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:56:10.423438 kubelet[2467]: I0213 19:56:10.422602 2467 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 19:56:10.423438 kubelet[2467]: I0213 19:56:10.422617 2467 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:56:10.423438 kubelet[2467]: I0213 19:56:10.422636 2467 kubelet.go:352] "Adding apiserver pod source"
Feb 13 19:56:10.423438 kubelet[2467]: I0213 19:56:10.422645 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:56:10.426430 kubelet[2467]: I0213 19:56:10.426398 2467 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 19:56:10.426856 kubelet[2467]: I0213 19:56:10.426830 2467 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:56:10.427263 kubelet[2467]: I0213 19:56:10.427245 2467 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:56:10.427306 kubelet[2467]: I0213 19:56:10.427279 2467 server.go:1287] "Started kubelet"
Feb 13 19:56:10.427589 kubelet[2467]: I0213 19:56:10.427556 2467 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.428352 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.428620 2467 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.428959 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.429181 2467 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.429571 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:56:10.433562 kubelet[2467]: E0213 19:56:10.433467 2467 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:56:10.433562 kubelet[2467]: I0213 19:56:10.433506 2467 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:56:10.433758 kubelet[2467]: I0213 19:56:10.433651 2467 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:56:10.433758 kubelet[2467]: I0213 19:56:10.433756 2467 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:56:10.437739 kubelet[2467]: I0213 19:56:10.435813 2467 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:56:10.437969 kubelet[2467]: I0213 19:56:10.437941 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:56:10.444378 kubelet[2467]: I0213 19:56:10.444333 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:56:10.446505 kubelet[2467]: I0213 19:56:10.446477 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:56:10.446581 kubelet[2467]: I0213 19:56:10.446515 2467 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:56:10.446581 kubelet[2467]: I0213 19:56:10.446534 2467 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:56:10.446581 kubelet[2467]: I0213 19:56:10.446540 2467 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:56:10.446649 kubelet[2467]: E0213 19:56:10.446596 2467 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:56:10.448616 kubelet[2467]: I0213 19:56:10.448598 2467 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:56:10.453520 kubelet[2467]: E0213 19:56:10.453488 2467 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:56:10.481591 kubelet[2467]: I0213 19:56:10.481562 2467 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:56:10.481591 kubelet[2467]: I0213 19:56:10.481584 2467 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:56:10.481721 kubelet[2467]: I0213 19:56:10.481605 2467 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:56:10.481779 kubelet[2467]: I0213 19:56:10.481756 2467 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:56:10.481813 kubelet[2467]: I0213 19:56:10.481775 2467 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:56:10.481813 kubelet[2467]: I0213 19:56:10.481793 2467 policy_none.go:49] "None policy: Start"
Feb 13 19:56:10.481813 kubelet[2467]: I0213 19:56:10.481801 2467 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:56:10.481813 kubelet[2467]: I0213 19:56:10.481809 2467 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:56:10.481924 kubelet[2467]: I0213 19:56:10.481907 2467 state_mem.go:75] "Updated machine memory state"
Feb 13 19:56:10.485496 kubelet[2467]: I0213 19:56:10.485471 2467 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:56:10.485638 kubelet[2467]: I0213 19:56:10.485621 2467 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:56:10.485672 kubelet[2467]: I0213 19:56:10.485638 2467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:56:10.486246 kubelet[2467]: I0213 19:56:10.486223 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:56:10.487041 kubelet[2467]: E0213 19:56:10.487007 2467 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:56:10.548679 kubelet[2467]: I0213 19:56:10.547456 2467 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:56:10.548679 kubelet[2467]: I0213 19:56:10.547500 2467 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.548679 kubelet[2467]: I0213 19:56:10.547560 2467 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:10.590435 kubelet[2467]: I0213 19:56:10.590391 2467 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:56:10.595798 kubelet[2467]: I0213 19:56:10.595742 2467 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Feb 13 19:56:10.595929 kubelet[2467]: I0213 19:56:10.595840 2467 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 19:56:10.735260 kubelet[2467]: I0213 19:56:10.735218 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.735348 kubelet[2467]: I0213 19:56:10.735329 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:56:10.735401 kubelet[2467]: I0213 19:56:10.735354 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:10.735423 kubelet[2467]: I0213 19:56:10.735370 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:10.735446 kubelet[2467]: I0213 19:56:10.735429 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.735469 kubelet[2467]: I0213 19:56:10.735445 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.735509 kubelet[2467]: I0213 19:56:10.735488 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.735535 kubelet[2467]: I0213 19:56:10.735512 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737d57d0fcea8c5e8fdc725ccc0ae59e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"737d57d0fcea8c5e8fdc725ccc0ae59e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:10.735578 kubelet[2467]: I0213 19:56:10.735528 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:56:10.853209 kubelet[2467]: E0213 19:56:10.853028 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:10.853282 kubelet[2467]: E0213 19:56:10.853226 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:10.854268 kubelet[2467]: E0213 19:56:10.854231 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:10.982895 sudo[2507]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 19:56:10.983169 sudo[2507]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 19:56:11.408544 sudo[2507]: pam_unix(sudo:session): session closed for user root
Feb 13 19:56:11.423279 kubelet[2467]: I0213 19:56:11.423242 2467 apiserver.go:52] "Watching apiserver"
Feb 13 19:56:11.434725 kubelet[2467]: I0213 19:56:11.434695 2467 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:56:11.461959 kubelet[2467]: E0213 19:56:11.461928 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:11.462703 kubelet[2467]: E0213 19:56:11.462670 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:11.462851 kubelet[2467]: I0213 19:56:11.462827 2467 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:11.468428 kubelet[2467]: E0213 19:56:11.468377 2467 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:56:11.468521 kubelet[2467]: E0213 19:56:11.468499 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:11.492091 kubelet[2467]: I0213 19:56:11.492016 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.492002008 podStartE2EDuration="1.492002008s" podCreationTimestamp="2025-02-13 19:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:11.491826186 +0000 UTC m=+1.120692110" watchObservedRunningTime="2025-02-13 19:56:11.492002008 +0000 UTC m=+1.120867932"
Feb 13 19:56:11.492231 kubelet[2467]: I0213 19:56:11.492121 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.492115839 podStartE2EDuration="1.492115839s" podCreationTimestamp="2025-02-13 19:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:11.484728004 +0000 UTC m=+1.113593928" watchObservedRunningTime="2025-02-13 19:56:11.492115839 +0000 UTC m=+1.120981763"
Feb 13 19:56:12.463440 kubelet[2467]: E0213 19:56:12.463145 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:12.463440 kubelet[2467]: E0213 19:56:12.463219 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:12.855215 kubelet[2467]: E0213 19:56:12.855052 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:13.437000 sudo[1615]: pam_unix(sudo:session): session closed for user root
Feb 13 19:56:13.438577 sshd[1612]: pam_unix(sshd:session): session closed for user core
Feb 13 19:56:13.440994 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:48468.service: Deactivated successfully.
Feb 13 19:56:13.443118 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:56:13.443623 systemd[1]: session-7.scope: Consumed 6.674s CPU time, 155.9M memory peak, 0B memory swap peak.
Feb 13 19:56:13.445051 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:56:13.446031 systemd-logind[1418]: Removed session 7.
Feb 13 19:56:13.464328 kubelet[2467]: E0213 19:56:13.464231 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:13.465538 kubelet[2467]: E0213 19:56:13.464479 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:15.813855 kubelet[2467]: I0213 19:56:15.813804 2467 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:56:15.814229 containerd[1435]: time="2025-02-13T19:56:15.814079420Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:56:15.814415 kubelet[2467]: I0213 19:56:15.814264 2467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:56:16.520267 kubelet[2467]: I0213 19:56:16.520175 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.518770799 podStartE2EDuration="6.518770799s" podCreationTimestamp="2025-02-13 19:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:11.4998779 +0000 UTC m=+1.128743824" watchObservedRunningTime="2025-02-13 19:56:16.518770799 +0000 UTC m=+6.147636723"
Feb 13 19:56:16.533855 systemd[1]: Created slice kubepods-besteffort-podca7517e2_e359_4c28_aed5_738b468d1f6d.slice - libcontainer container kubepods-besteffort-podca7517e2_e359_4c28_aed5_738b468d1f6d.slice.
Feb 13 19:56:16.548054 systemd[1]: Created slice kubepods-burstable-podd4ae7822_ba56_4662_8bca_13f47dbe7eed.slice - libcontainer container kubepods-burstable-podd4ae7822_ba56_4662_8bca_13f47dbe7eed.slice.
Feb 13 19:56:16.574607 kubelet[2467]: I0213 19:56:16.574502 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hubble-tls\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574713 kubelet[2467]: I0213 19:56:16.574628 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-kernel\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574713 kubelet[2467]: I0213 19:56:16.574653 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4ae7822-ba56-4662-8bca-13f47dbe7eed-clustermesh-secrets\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574713 kubelet[2467]: I0213 19:56:16.574668 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-cgroup\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574713 kubelet[2467]: I0213 19:56:16.574682 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cni-path\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574719 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-bpf-maps\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574738 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hostproc\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574752 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-lib-modules\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574768 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssmn\" (UniqueName: \"kubernetes.io/projected/ca7517e2-e359-4c28-aed5-738b468d1f6d-kube-api-access-fssmn\") pod \"kube-proxy-tlzxp\" (UID: \"ca7517e2-e359-4c28-aed5-738b468d1f6d\") " pod="kube-system/kube-proxy-tlzxp"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574795 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-xtables-lock\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574826 kubelet[2467]: I0213 19:56:16.574811 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-net\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574941 kubelet[2467]: I0213 19:56:16.574825 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca7517e2-e359-4c28-aed5-738b468d1f6d-kube-proxy\") pod \"kube-proxy-tlzxp\" (UID: \"ca7517e2-e359-4c28-aed5-738b468d1f6d\") " pod="kube-system/kube-proxy-tlzxp"
Feb 13 19:56:16.574941 kubelet[2467]: I0213 19:56:16.574840 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7517e2-e359-4c28-aed5-738b468d1f6d-lib-modules\") pod \"kube-proxy-tlzxp\" (UID: \"ca7517e2-e359-4c28-aed5-738b468d1f6d\") " pod="kube-system/kube-proxy-tlzxp"
Feb 13 19:56:16.574941 kubelet[2467]: I0213 19:56:16.574865 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-config-path\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574941 kubelet[2467]: I0213 19:56:16.574882 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tst64\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-kube-api-access-tst64\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.574941 kubelet[2467]: I0213 19:56:16.574898 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7517e2-e359-4c28-aed5-738b468d1f6d-xtables-lock\") pod \"kube-proxy-tlzxp\" (UID: \"ca7517e2-e359-4c28-aed5-738b468d1f6d\") " pod="kube-system/kube-proxy-tlzxp"
Feb 13 19:56:16.575040 kubelet[2467]: I0213 19:56:16.574912 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-run\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.575040 kubelet[2467]: I0213 19:56:16.574927 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-etc-cni-netd\") pod \"cilium-ttzg2\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " pod="kube-system/cilium-ttzg2"
Feb 13 19:56:16.842204 kubelet[2467]: E0213 19:56:16.842076 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:16.843001 containerd[1435]: time="2025-02-13T19:56:16.842667056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlzxp,Uid:ca7517e2-e359-4c28-aed5-738b468d1f6d,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:16.851733 kubelet[2467]: E0213 19:56:16.851700 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:16.853748 containerd[1435]: time="2025-02-13T19:56:16.853706972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttzg2,Uid:d4ae7822-ba56-4662-8bca-13f47dbe7eed,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:16.864010 containerd[1435]: time="2025-02-13T19:56:16.863613443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:16.864010 containerd[1435]: time="2025-02-13T19:56:16.863670502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:16.864010 containerd[1435]: time="2025-02-13T19:56:16.863686907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:16.864010 containerd[1435]: time="2025-02-13T19:56:16.863761531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:16.873112 containerd[1435]: time="2025-02-13T19:56:16.873029756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:16.873591 containerd[1435]: time="2025-02-13T19:56:16.873538680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:16.873591 containerd[1435]: time="2025-02-13T19:56:16.873566409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:16.873770 containerd[1435]: time="2025-02-13T19:56:16.873686728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:16.881354 systemd[1]: Started cri-containerd-9b7e7276829a447134fba8b13eb9eb5d55dd576e90b9b0ebbb1ade7a1cf4a8ff.scope - libcontainer container 9b7e7276829a447134fba8b13eb9eb5d55dd576e90b9b0ebbb1ade7a1cf4a8ff.
Feb 13 19:56:16.887063 systemd[1]: Started cri-containerd-1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9.scope - libcontainer container 1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9.
Feb 13 19:56:16.905359 containerd[1435]: time="2025-02-13T19:56:16.905268461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlzxp,Uid:ca7517e2-e359-4c28-aed5-738b468d1f6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7e7276829a447134fba8b13eb9eb5d55dd576e90b9b0ebbb1ade7a1cf4a8ff\""
Feb 13 19:56:16.907065 kubelet[2467]: E0213 19:56:16.907039 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:16.910229 containerd[1435]: time="2025-02-13T19:56:16.910141511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttzg2,Uid:d4ae7822-ba56-4662-8bca-13f47dbe7eed,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\""
Feb 13 19:56:16.911136 kubelet[2467]: E0213 19:56:16.910769 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:16.911225 containerd[1435]: time="2025-02-13T19:56:16.911041361Z" level=info msg="CreateContainer within sandbox \"9b7e7276829a447134fba8b13eb9eb5d55dd576e90b9b0ebbb1ade7a1cf4a8ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:56:16.911981 containerd[1435]: time="2025-02-13T19:56:16.911632632Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:56:16.941127 containerd[1435]: time="2025-02-13T19:56:16.940387214Z" level=info msg="CreateContainer within sandbox \"9b7e7276829a447134fba8b13eb9eb5d55dd576e90b9b0ebbb1ade7a1cf4a8ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f97fff4e2acc1d3e233ec67b363356f3175aee3f427f80572fb7e84d1840f4ec\""
Feb 13 19:56:16.941685 systemd[1]: Created slice kubepods-besteffort-podd9943eba_7ca1_4f12_a782_2729c46c8bb0.slice - libcontainer container kubepods-besteffort-podd9943eba_7ca1_4f12_a782_2729c46c8bb0.slice.
Feb 13 19:56:16.942232 containerd[1435]: time="2025-02-13T19:56:16.942203399Z" level=info msg="StartContainer for \"f97fff4e2acc1d3e233ec67b363356f3175aee3f427f80572fb7e84d1840f4ec\""
Feb 13 19:56:16.967423 systemd[1]: Started cri-containerd-f97fff4e2acc1d3e233ec67b363356f3175aee3f427f80572fb7e84d1840f4ec.scope - libcontainer container f97fff4e2acc1d3e233ec67b363356f3175aee3f427f80572fb7e84d1840f4ec.
Feb 13 19:56:16.978446 kubelet[2467]: I0213 19:56:16.978393 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9943eba-7ca1-4f12-a782-2729c46c8bb0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l729s\" (UID: \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\") " pod="kube-system/cilium-operator-6c4d7847fc-l729s" Feb 13 19:56:16.978446 kubelet[2467]: I0213 19:56:16.978439 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxxx\" (UniqueName: \"kubernetes.io/projected/d9943eba-7ca1-4f12-a782-2729c46c8bb0-kube-api-access-7xxxx\") pod \"cilium-operator-6c4d7847fc-l729s\" (UID: \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\") " pod="kube-system/cilium-operator-6c4d7847fc-l729s" Feb 13 19:56:16.991996 containerd[1435]: time="2025-02-13T19:56:16.991959587Z" level=info msg="StartContainer for \"f97fff4e2acc1d3e233ec67b363356f3175aee3f427f80572fb7e84d1840f4ec\" returns successfully" Feb 13 19:56:17.243979 kubelet[2467]: E0213 19:56:17.243942 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:17.244634 containerd[1435]: time="2025-02-13T19:56:17.244423659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l729s,Uid:d9943eba-7ca1-4f12-a782-2729c46c8bb0,Namespace:kube-system,Attempt:0,}" Feb 13 19:56:17.278417 containerd[1435]: time="2025-02-13T19:56:17.278329314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:56:17.278417 containerd[1435]: time="2025-02-13T19:56:17.278380690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:56:17.278417 containerd[1435]: time="2025-02-13T19:56:17.278394534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:56:17.278650 containerd[1435]: time="2025-02-13T19:56:17.278474638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:56:17.298356 systemd[1]: Started cri-containerd-3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78.scope - libcontainer container 3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78. 
Feb 13 19:56:17.326624 containerd[1435]: time="2025-02-13T19:56:17.326483072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l729s,Uid:d9943eba-7ca1-4f12-a782-2729c46c8bb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\"" Feb 13 19:56:17.327416 kubelet[2467]: E0213 19:56:17.327390 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:17.473705 kubelet[2467]: E0213 19:56:17.473664 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:17.481076 kubelet[2467]: I0213 19:56:17.481023 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlzxp" podStartSLOduration=1.481008412 podStartE2EDuration="1.481008412s" podCreationTimestamp="2025-02-13 19:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:17.480982044 +0000 UTC m=+7.109847968" watchObservedRunningTime="2025-02-13 19:56:17.481008412 +0000 UTC m=+7.109874336" Feb 13 19:56:21.286809 kubelet[2467]: E0213 19:56:21.286393 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:21.482120 kubelet[2467]: E0213 19:56:21.482072 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:22.846865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960486788.mount: Deactivated successfully. 
Feb 13 19:56:22.872501 kubelet[2467]: E0213 19:56:22.872472 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:23.182628 kubelet[2467]: E0213 19:56:23.182582 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:23.486909 kubelet[2467]: E0213 19:56:23.486492 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:24.167838 containerd[1435]: time="2025-02-13T19:56:24.167780020Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:24.168799 containerd[1435]: time="2025-02-13T19:56:24.168752625Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:56:24.170286 containerd[1435]: time="2025-02-13T19:56:24.170167602Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:24.172270 containerd[1435]: time="2025-02-13T19:56:24.172226155Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.260556073s" Feb 13 19:56:24.172340 containerd[1435]: time="2025-02-13T19:56:24.172270245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:56:24.184355 containerd[1435]: time="2025-02-13T19:56:24.184319659Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:56:24.187920 containerd[1435]: time="2025-02-13T19:56:24.186342165Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:56:24.217614 containerd[1435]: time="2025-02-13T19:56:24.217572293Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\"" Feb 13 19:56:24.218211 containerd[1435]: time="2025-02-13T19:56:24.218156936Z" level=info msg="StartContainer for \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\"" Feb 13 19:56:24.247416 systemd[1]: Started cri-containerd-bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb.scope - libcontainer container bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb. 
Feb 13 19:56:24.268735 containerd[1435]: time="2025-02-13T19:56:24.268694606Z" level=info msg="StartContainer for \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\" returns successfully" Feb 13 19:56:24.323002 systemd[1]: cri-containerd-bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb.scope: Deactivated successfully. Feb 13 19:56:24.452011 containerd[1435]: time="2025-02-13T19:56:24.435707376Z" level=info msg="shim disconnected" id=bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb namespace=k8s.io Feb 13 19:56:24.452806 containerd[1435]: time="2025-02-13T19:56:24.452229291Z" level=warning msg="cleaning up after shim disconnected" id=bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb namespace=k8s.io Feb 13 19:56:24.452806 containerd[1435]: time="2025-02-13T19:56:24.452251455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:24.494241 kubelet[2467]: E0213 19:56:24.494139 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:24.496956 containerd[1435]: time="2025-02-13T19:56:24.496815909Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:56:24.508141 containerd[1435]: time="2025-02-13T19:56:24.508091961Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\"" Feb 13 19:56:24.509950 containerd[1435]: time="2025-02-13T19:56:24.509131379Z" level=info msg="StartContainer for \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\"" Feb 13 19:56:24.536399 systemd[1]: Started cri-containerd-a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459.scope - libcontainer container a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459. Feb 13 19:56:24.555522 containerd[1435]: time="2025-02-13T19:56:24.555471806Z" level=info msg="StartContainer for \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\" returns successfully" Feb 13 19:56:24.572491 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:56:24.572752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:56:24.572821 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:56:24.579553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:56:24.579733 systemd[1]: cri-containerd-a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459.scope: Deactivated successfully. Feb 13 19:56:24.599875 containerd[1435]: time="2025-02-13T19:56:24.599808332Z" level=info msg="shim disconnected" id=a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459 namespace=k8s.io Feb 13 19:56:24.599875 containerd[1435]: time="2025-02-13T19:56:24.599860263Z" level=warning msg="cleaning up after shim disconnected" id=a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459 namespace=k8s.io Feb 13 19:56:24.599875 containerd[1435]: time="2025-02-13T19:56:24.599869465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:24.613171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:56:25.136157 update_engine[1420]: I20250213 19:56:25.136078 1420 update_attempter.cc:509] Updating boot flags... Feb 13 19:56:25.161229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3003) Feb 13 19:56:25.190275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3006) Feb 13 19:56:25.215422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb-rootfs.mount: Deactivated successfully. Feb 13 19:56:25.223267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3006) Feb 13 19:56:25.497706 kubelet[2467]: E0213 19:56:25.497348 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:25.500351 containerd[1435]: time="2025-02-13T19:56:25.500310942Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:56:25.523302 containerd[1435]: time="2025-02-13T19:56:25.523260932Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\"" Feb 13 19:56:25.524002 containerd[1435]: time="2025-02-13T19:56:25.523979956Z" level=info msg="StartContainer for \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\"" Feb 13 19:56:25.550488 systemd[1]: Started cri-containerd-5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00.scope - libcontainer container 5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00. Feb 13 19:56:25.575028 containerd[1435]: time="2025-02-13T19:56:25.574990758Z" level=info msg="StartContainer for \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\" returns successfully" Feb 13 19:56:25.591654 systemd[1]: cri-containerd-5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00.scope: Deactivated successfully. 
Feb 13 19:56:25.674203 containerd[1435]: time="2025-02-13T19:56:25.674133226Z" level=info msg="shim disconnected" id=5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00 namespace=k8s.io Feb 13 19:56:25.674203 containerd[1435]: time="2025-02-13T19:56:25.674181076Z" level=warning msg="cleaning up after shim disconnected" id=5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00 namespace=k8s.io Feb 13 19:56:25.674203 containerd[1435]: time="2025-02-13T19:56:25.674214002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:25.790793 containerd[1435]: time="2025-02-13T19:56:25.790670893Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:25.791863 containerd[1435]: time="2025-02-13T19:56:25.791821643Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:56:25.792919 containerd[1435]: time="2025-02-13T19:56:25.792875014Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:25.794072 containerd[1435]: time="2025-02-13T19:56:25.794013962Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.609658015s" Feb 13 19:56:25.794072 containerd[1435]: time="2025-02-13T19:56:25.794047049Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:56:25.796626 containerd[1435]: time="2025-02-13T19:56:25.796500779Z" level=info msg="CreateContainer within sandbox \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:56:25.807486 containerd[1435]: time="2025-02-13T19:56:25.807444768Z" level=info msg="CreateContainer within sandbox \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\"" Feb 13 19:56:25.807908 containerd[1435]: time="2025-02-13T19:56:25.807828685Z" level=info msg="StartContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\"" Feb 13 19:56:25.829365 systemd[1]: Started cri-containerd-dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6.scope - libcontainer container dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6. 
Feb 13 19:56:25.854281 containerd[1435]: time="2025-02-13T19:56:25.853928505Z" level=info msg="StartContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" returns successfully" Feb 13 19:56:26.216694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00-rootfs.mount: Deactivated successfully. Feb 13 19:56:26.510490 kubelet[2467]: E0213 19:56:26.510396 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:26.515248 kubelet[2467]: E0213 19:56:26.514118 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:26.525174 containerd[1435]: time="2025-02-13T19:56:26.525117613Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:56:26.543126 kubelet[2467]: I0213 19:56:26.543049 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l729s" podStartSLOduration=2.076330652 podStartE2EDuration="10.542989934s" podCreationTimestamp="2025-02-13 19:56:16 +0000 UTC" firstStartedPulling="2025-02-13 19:56:17.328330555 +0000 UTC m=+6.957196479" lastFinishedPulling="2025-02-13 19:56:25.794989837 +0000 UTC m=+15.423855761" observedRunningTime="2025-02-13 19:56:26.524251848 +0000 UTC m=+16.153117852" watchObservedRunningTime="2025-02-13 19:56:26.542989934 +0000 UTC m=+16.171855858" Feb 13 19:56:26.553568 containerd[1435]: time="2025-02-13T19:56:26.551308357Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\"" Feb 13 19:56:26.553568 containerd[1435]: time="2025-02-13T19:56:26.553423000Z" level=info msg="StartContainer for \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\"" Feb 13 19:56:26.584414 systemd[1]: Started cri-containerd-6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a.scope - libcontainer container 6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a. Feb 13 19:56:26.606143 systemd[1]: cri-containerd-6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a.scope: Deactivated successfully. 
Feb 13 19:56:26.608092 containerd[1435]: time="2025-02-13T19:56:26.608008787Z" level=info msg="StartContainer for \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\" returns successfully" Feb 13 19:56:26.638765 containerd[1435]: time="2025-02-13T19:56:26.638687866Z" level=info msg="shim disconnected" id=6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a namespace=k8s.io Feb 13 19:56:26.638765 containerd[1435]: time="2025-02-13T19:56:26.638752358Z" level=warning msg="cleaning up after shim disconnected" id=6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a namespace=k8s.io Feb 13 19:56:26.638765 containerd[1435]: time="2025-02-13T19:56:26.638761320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:27.215757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a-rootfs.mount: Deactivated successfully. Feb 13 19:56:27.526607 kubelet[2467]: E0213 19:56:27.526016 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:27.526607 kubelet[2467]: E0213 19:56:27.526152 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:27.535694 containerd[1435]: time="2025-02-13T19:56:27.535330802Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:56:27.556347 containerd[1435]: time="2025-02-13T19:56:27.556289280Z" level=info msg="CreateContainer within sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\"" Feb 13 19:56:27.559139 containerd[1435]: time="2025-02-13T19:56:27.558313807Z" level=info msg="StartContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\"" Feb 13 19:56:27.583347 systemd[1]: Started cri-containerd-15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa.scope - libcontainer container 15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa. Feb 13 19:56:27.611547 containerd[1435]: time="2025-02-13T19:56:27.611500965Z" level=info msg="StartContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" returns successfully" Feb 13 19:56:27.699879 kubelet[2467]: I0213 19:56:27.699848 2467 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:56:27.747047 systemd[1]: Created slice kubepods-burstable-pod50c9bb6f_df25_41a3_9c46_90792f32af82.slice - libcontainer container kubepods-burstable-pod50c9bb6f_df25_41a3_9c46_90792f32af82.slice. Feb 13 19:56:27.753218 systemd[1]: Created slice kubepods-burstable-pod20c61fcc_6dfb_44f7_a39c_5c0e262f2cac.slice - libcontainer container kubepods-burstable-pod20c61fcc_6dfb_44f7_a39c_5c0e262f2cac.slice. 
Feb 13 19:56:27.851588 kubelet[2467]: I0213 19:56:27.851458 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqwk\" (UniqueName: \"kubernetes.io/projected/20c61fcc-6dfb-44f7-a39c-5c0e262f2cac-kube-api-access-4fqwk\") pod \"coredns-668d6bf9bc-sk2bg\" (UID: \"20c61fcc-6dfb-44f7-a39c-5c0e262f2cac\") " pod="kube-system/coredns-668d6bf9bc-sk2bg" Feb 13 19:56:27.851588 kubelet[2467]: I0213 19:56:27.851501 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50c9bb6f-df25-41a3-9c46-90792f32af82-config-volume\") pod \"coredns-668d6bf9bc-k9lgm\" (UID: \"50c9bb6f-df25-41a3-9c46-90792f32af82\") " pod="kube-system/coredns-668d6bf9bc-k9lgm" Feb 13 19:56:27.851588 kubelet[2467]: I0213 19:56:27.851523 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxfl\" (UniqueName: \"kubernetes.io/projected/50c9bb6f-df25-41a3-9c46-90792f32af82-kube-api-access-zfxfl\") pod \"coredns-668d6bf9bc-k9lgm\" (UID: \"50c9bb6f-df25-41a3-9c46-90792f32af82\") " pod="kube-system/coredns-668d6bf9bc-k9lgm" Feb 13 19:56:27.851588 kubelet[2467]: I0213 19:56:27.851544 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c61fcc-6dfb-44f7-a39c-5c0e262f2cac-config-volume\") pod \"coredns-668d6bf9bc-sk2bg\" (UID: \"20c61fcc-6dfb-44f7-a39c-5c0e262f2cac\") " pod="kube-system/coredns-668d6bf9bc-sk2bg" Feb 13 19:56:28.051222 kubelet[2467]: E0213 19:56:28.051051 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:28.051835 containerd[1435]: time="2025-02-13T19:56:28.051796043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9lgm,Uid:50c9bb6f-df25-41a3-9c46-90792f32af82,Namespace:kube-system,Attempt:0,}" Feb 13 19:56:28.058178 kubelet[2467]: E0213 19:56:28.058145 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:28.059088 containerd[1435]: time="2025-02-13T19:56:28.059060338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sk2bg,Uid:20c61fcc-6dfb-44f7-a39c-5c0e262f2cac,Namespace:kube-system,Attempt:0,}" Feb 13 19:56:28.530956 kubelet[2467]: E0213 19:56:28.530909 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:28.548490 kubelet[2467]: I0213 19:56:28.547741 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttzg2" podStartSLOduration=5.274841496 podStartE2EDuration="12.547725008s" podCreationTimestamp="2025-02-13 19:56:16 +0000 UTC" firstStartedPulling="2025-02-13 19:56:16.911248508 +0000 UTC m=+6.540114432" lastFinishedPulling="2025-02-13 19:56:24.18413202 +0000 UTC m=+13.812997944" observedRunningTime="2025-02-13 19:56:28.54611313 +0000 UTC m=+18.174979094" watchObservedRunningTime="2025-02-13 19:56:28.547725008 +0000 UTC m=+18.176590932"
Feb 13 19:56:29.532312 kubelet[2467]: E0213 19:56:29.532206 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:29.890967 systemd-networkd[1374]: cilium_host: Link UP Feb 13 19:56:29.891101 systemd-networkd[1374]: cilium_net: Link UP Feb 13 19:56:29.891258 systemd-networkd[1374]: cilium_net: Gained carrier Feb 13 19:56:29.891403 systemd-networkd[1374]: cilium_host: Gained carrier Feb 13 19:56:29.973685 systemd-networkd[1374]: cilium_vxlan: Link UP Feb 13 19:56:29.973691 systemd-networkd[1374]: cilium_vxlan: Gained carrier Feb 13 19:56:30.267225 kernel: NET: Registered PF_ALG protocol family Feb 13 19:56:30.536159 kubelet[2467]: E0213 19:56:30.536112 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:30.833757 systemd-networkd[1374]: lxc_health: Link UP Feb 13 19:56:30.840948 systemd-networkd[1374]: lxc_health: Gained carrier Feb 13 19:56:30.854692 systemd-networkd[1374]: cilium_net: Gained IPv6LL Feb 13 19:56:30.918401 systemd-networkd[1374]: cilium_host: Gained IPv6LL Feb 13 19:56:31.261956 systemd-networkd[1374]: lxc72648ccba12b: Link UP Feb 13 19:56:31.272322 systemd-networkd[1374]: lxc6804f897760c: Link UP Feb 13 19:56:31.284215 kernel: eth0: renamed from tmp90239 Feb 13 19:56:31.296268 kernel: eth0: renamed from tmp75119 Feb 13 19:56:31.304672 systemd-networkd[1374]: lxc6804f897760c: Gained carrier Feb 13 19:56:31.307059 systemd-networkd[1374]: lxc72648ccba12b: Gained carrier Feb 13 19:56:31.537887 kubelet[2467]: E0213 19:56:31.537568 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:31.625272 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Feb 13 19:56:32.538264 kubelet[2467]: E0213 19:56:32.538138 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:32.902347 systemd-networkd[1374]: lxc_health: Gained IPv6LL Feb 13 19:56:33.286324 systemd-networkd[1374]: lxc6804f897760c: Gained IPv6LL Feb 13 19:56:33.350313 systemd-networkd[1374]: lxc72648ccba12b: Gained IPv6LL Feb 13 19:56:33.540157 kubelet[2467]: E0213 19:56:33.540062 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:34.732485 containerd[1435]: time="2025-02-13T19:56:34.732402375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:56:34.732485 containerd[1435]: time="2025-02-13T19:56:34.732464583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:56:34.733057 containerd[1435]: time="2025-02-13T19:56:34.732479625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:56:34.733057 containerd[1435]: time="2025-02-13T19:56:34.732555275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:34.735778 containerd[1435]: time="2025-02-13T19:56:34.735688527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:56:34.735926 containerd[1435]: time="2025-02-13T19:56:34.735884873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:56:34.739762 containerd[1435]: time="2025-02-13T19:56:34.736001769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:56:34.740292 containerd[1435]: time="2025-02-13T19:56:34.740097828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:56:34.769344 systemd[1]: Started cri-containerd-75119e62bbadfdc50bb084be356de0e3e9d1f447f0958a2611d73b8a9b3dbb74.scope - libcontainer container 75119e62bbadfdc50bb084be356de0e3e9d1f447f0958a2611d73b8a9b3dbb74. Feb 13 19:56:34.770483 systemd[1]: Started cri-containerd-9023957cae86c05eb6c84ca2b0e51e12bf9a6cc9d1c5c420d71816b78f954c23.scope - libcontainer container 9023957cae86c05eb6c84ca2b0e51e12bf9a6cc9d1c5c420d71816b78f954c23. Feb 13 19:56:34.780625 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:56:34.781951 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:56:34.799257 containerd[1435]: time="2025-02-13T19:56:34.798649817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9lgm,Uid:50c9bb6f-df25-41a3-9c46-90792f32af82,Namespace:kube-system,Attempt:0,} returns sandbox id \"75119e62bbadfdc50bb084be356de0e3e9d1f447f0958a2611d73b8a9b3dbb74\"" Feb 13 19:56:34.800021 kubelet[2467]: E0213 19:56:34.799990 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:34.801517 containerd[1435]: time="2025-02-13T19:56:34.801380097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sk2bg,Uid:20c61fcc-6dfb-44f7-a39c-5c0e262f2cac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9023957cae86c05eb6c84ca2b0e51e12bf9a6cc9d1c5c420d71816b78f954c23\"" Feb 13 19:56:34.802604 kubelet[2467]: E0213 19:56:34.802566 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:34.804444 containerd[1435]: time="2025-02-13T19:56:34.804399654Z" level=info msg="CreateContainer within sandbox \"75119e62bbadfdc50bb084be356de0e3e9d1f447f0958a2611d73b8a9b3dbb74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:56:34.806235 containerd[1435]: time="2025-02-13T19:56:34.806115680Z" level=info msg="CreateContainer within sandbox \"9023957cae86c05eb6c84ca2b0e51e12bf9a6cc9d1c5c420d71816b78f954c23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:56:34.821535 containerd[1435]: time="2025-02-13T19:56:34.821466101Z" level=info msg="CreateContainer within sandbox \"9023957cae86c05eb6c84ca2b0e51e12bf9a6cc9d1c5c420d71816b78f954c23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"345191921bbd062b9520e339d949f9a36eca02c340fcbfbc4d66dba708905095\""
Feb 13 19:56:34.822171 containerd[1435]: time="2025-02-13T19:56:34.822142070Z" level=info msg="StartContainer for \"345191921bbd062b9520e339d949f9a36eca02c340fcbfbc4d66dba708905095\"" Feb 13 19:56:34.823128 containerd[1435]: time="2025-02-13T19:56:34.823085354Z" level=info msg="CreateContainer within sandbox \"75119e62bbadfdc50bb084be356de0e3e9d1f447f0958a2611d73b8a9b3dbb74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ca75e19777bf6c803a1cdd8e1906debcd28e53c909f7de5105c4c90f680d69f\"" Feb 13 19:56:34.824280 containerd[1435]: time="2025-02-13T19:56:34.823488807Z" level=info msg="StartContainer for \"4ca75e19777bf6c803a1cdd8e1906debcd28e53c909f7de5105c4c90f680d69f\"" Feb 13 19:56:34.850346 systemd[1]: Started cri-containerd-345191921bbd062b9520e339d949f9a36eca02c340fcbfbc4d66dba708905095.scope - libcontainer container 345191921bbd062b9520e339d949f9a36eca02c340fcbfbc4d66dba708905095. Feb 13 19:56:34.853382 systemd[1]: Started cri-containerd-4ca75e19777bf6c803a1cdd8e1906debcd28e53c909f7de5105c4c90f680d69f.scope - libcontainer container 4ca75e19777bf6c803a1cdd8e1906debcd28e53c909f7de5105c4c90f680d69f. Feb 13 19:56:34.874991 containerd[1435]: time="2025-02-13T19:56:34.874942822Z" level=info msg="StartContainer for \"345191921bbd062b9520e339d949f9a36eca02c340fcbfbc4d66dba708905095\" returns successfully" Feb 13 19:56:34.880322 containerd[1435]: time="2025-02-13T19:56:34.880282925Z" level=info msg="StartContainer for \"4ca75e19777bf6c803a1cdd8e1906debcd28e53c909f7de5105c4c90f680d69f\" returns successfully" Feb 13 19:56:35.547456 kubelet[2467]: E0213 19:56:35.547090 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:35.550595 kubelet[2467]: E0213 19:56:35.550558 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:35.559903 kubelet[2467]: I0213 19:56:35.559843 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sk2bg" podStartSLOduration=19.559419715 podStartE2EDuration="19.559419715s" podCreationTimestamp="2025-02-13 19:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:35.558041341 +0000 UTC m=+25.186907265" watchObservedRunningTime="2025-02-13 19:56:35.559419715 +0000 UTC m=+25.188285639" Feb 13 19:56:35.581752 kubelet[2467]: I0213 19:56:35.581692 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k9lgm" podStartSLOduration=19.581674285 podStartE2EDuration="19.581674285s" podCreationTimestamp="2025-02-13 19:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:35.569086535 +0000 UTC m=+25.197952459" watchObservedRunningTime="2025-02-13 19:56:35.581674285 +0000 UTC m=+25.210540209" Feb 13 19:56:36.551737 kubelet[2467]: E0213 19:56:36.551645 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:36.551737 kubelet[2467]: E0213 19:56:36.551674 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:56:37.553709 kubelet[2467]: E0213 19:56:37.553671 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:37.554045 kubelet[2467]: E0213 19:56:37.553750 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:56:39.075026 systemd[1]: Started sshd@7-10.0.0.127:22-10.0.0.1:43978.service - OpenSSH per-connection server daemon (10.0.0.1:43978). Feb 13 19:56:39.114373 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:39.115745 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:39.119565 systemd-logind[1418]: New session 8 of user core. Feb 13 19:56:39.128402 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:56:39.260503 sshd[3870]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:39.264127 systemd[1]: sshd@7-10.0.0.127:22-10.0.0.1:43978.service: Deactivated successfully. Feb 13 19:56:39.266745 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:56:39.267531 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:56:39.268415 systemd-logind[1418]: Removed session 8. Feb 13 19:56:44.271880 systemd[1]: Started sshd@8-10.0.0.127:22-10.0.0.1:33708.service - OpenSSH per-connection server daemon (10.0.0.1:33708). Feb 13 19:56:44.309593 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 33708 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:44.310909 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:44.314960 systemd-logind[1418]: New session 9 of user core. Feb 13 19:56:44.326319 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:56:44.435131 sshd[3888]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:44.438232 systemd[1]: sshd@8-10.0.0.127:22-10.0.0.1:33708.service: Deactivated successfully. Feb 13 19:56:44.440886 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:56:44.441709 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:56:44.442448 systemd-logind[1418]: Removed session 9. Feb 13 19:56:49.446686 systemd[1]: Started sshd@9-10.0.0.127:22-10.0.0.1:33716.service - OpenSSH per-connection server daemon (10.0.0.1:33716). Feb 13 19:56:49.483999 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 33716 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:49.485139 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:49.488792 systemd-logind[1418]: New session 10 of user core. Feb 13 19:56:49.498379 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:56:49.609600 sshd[3906]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:49.612678 systemd[1]: sshd@9-10.0.0.127:22-10.0.0.1:33716.service: Deactivated successfully. Feb 13 19:56:49.614556 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:56:49.615227 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:56:49.616005 systemd-logind[1418]: Removed session 10.
Feb 13 19:56:54.620796 systemd[1]: Started sshd@10-10.0.0.127:22-10.0.0.1:45378.service - OpenSSH per-connection server daemon (10.0.0.1:45378). Feb 13 19:56:54.655967 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 45378 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:54.657104 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:54.660423 systemd-logind[1418]: New session 11 of user core. Feb 13 19:56:54.670399 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:56:54.774792 sshd[3921]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:54.795667 systemd[1]: sshd@10-10.0.0.127:22-10.0.0.1:45378.service: Deactivated successfully. Feb 13 19:56:54.797323 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:56:54.798589 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:56:54.806684 systemd[1]: Started sshd@11-10.0.0.127:22-10.0.0.1:45392.service - OpenSSH per-connection server daemon (10.0.0.1:45392). Feb 13 19:56:54.808016 systemd-logind[1418]: Removed session 11. Feb 13 19:56:54.838588 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 45392 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:54.839772 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:54.843186 systemd-logind[1418]: New session 12 of user core. Feb 13 19:56:54.855351 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:56:54.999499 sshd[3937]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:55.009128 systemd[1]: sshd@11-10.0.0.127:22-10.0.0.1:45392.service: Deactivated successfully. Feb 13 19:56:55.014170 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:56:55.019345 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:56:55.029612 systemd[1]: Started sshd@12-10.0.0.127:22-10.0.0.1:45402.service - OpenSSH per-connection server daemon (10.0.0.1:45402). Feb 13 19:56:55.030640 systemd-logind[1418]: Removed session 12. Feb 13 19:56:55.063214 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:55.064170 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:55.068525 systemd-logind[1418]: New session 13 of user core. Feb 13 19:56:55.079361 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:56:55.188210 sshd[3949]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:55.192959 systemd[1]: sshd@12-10.0.0.127:22-10.0.0.1:45402.service: Deactivated successfully. Feb 13 19:56:55.194971 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:56:55.195728 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:56:55.196835 systemd-logind[1418]: Removed session 13. Feb 13 19:57:00.204027 systemd[1]: Started sshd@13-10.0.0.127:22-10.0.0.1:45410.service - OpenSSH per-connection server daemon (10.0.0.1:45410). Feb 13 19:57:00.240271 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 45410 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:00.240588 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:00.244310 systemd-logind[1418]: New session 14 of user core. 
Feb 13 19:57:00.254373 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:57:00.363962 sshd[3964]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:00.367059 systemd[1]: sshd@13-10.0.0.127:22-10.0.0.1:45410.service: Deactivated successfully. Feb 13 19:57:00.370748 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:57:00.371557 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:57:00.372480 systemd-logind[1418]: Removed session 14. Feb 13 19:57:05.374653 systemd[1]: Started sshd@14-10.0.0.127:22-10.0.0.1:35648.service - OpenSSH per-connection server daemon (10.0.0.1:35648). Feb 13 19:57:05.409861 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 35648 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:05.411059 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:05.416538 systemd-logind[1418]: New session 15 of user core. Feb 13 19:57:05.427617 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:57:05.553284 sshd[3978]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:05.561670 systemd[1]: sshd@14-10.0.0.127:22-10.0.0.1:35648.service: Deactivated successfully. Feb 13 19:57:05.568399 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:57:05.569789 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:57:05.578746 systemd[1]: Started sshd@15-10.0.0.127:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654). Feb 13 19:57:05.579934 systemd-logind[1418]: Removed session 15. Feb 13 19:57:05.610631 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:05.611917 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:05.615870 systemd-logind[1418]: New session 16 of user core. Feb 13 19:57:05.625333 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:57:05.859402 sshd[3992]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:05.867561 systemd[1]: sshd@15-10.0.0.127:22-10.0.0.1:35654.service: Deactivated successfully. Feb 13 19:57:05.870831 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:57:05.872283 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:57:05.879684 systemd[1]: Started sshd@16-10.0.0.127:22-10.0.0.1:35658.service - OpenSSH per-connection server daemon (10.0.0.1:35658). Feb 13 19:57:05.881132 systemd-logind[1418]: Removed session 16. Feb 13 19:57:05.915726 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:05.917095 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:05.920785 systemd-logind[1418]: New session 17 of user core. Feb 13 19:57:05.930425 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:57:06.666105 sshd[4005]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:06.676056 systemd[1]: sshd@16-10.0.0.127:22-10.0.0.1:35658.service: Deactivated successfully. Feb 13 19:57:06.678928 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:57:06.680324 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. 
Feb 13 19:57:06.690569 systemd[1]: Started sshd@17-10.0.0.127:22-10.0.0.1:35660.service - OpenSSH per-connection server daemon (10.0.0.1:35660). Feb 13 19:57:06.691435 systemd-logind[1418]: Removed session 17. Feb 13 19:57:06.724480 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 35660 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:06.725635 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:06.729716 systemd-logind[1418]: New session 18 of user core. Feb 13 19:57:06.735376 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:57:06.947575 sshd[4027]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:06.963336 systemd[1]: sshd@17-10.0.0.127:22-10.0.0.1:35660.service: Deactivated successfully. Feb 13 19:57:06.965192 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:57:06.966696 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:57:06.976546 systemd[1]: Started sshd@18-10.0.0.127:22-10.0.0.1:35674.service - OpenSSH per-connection server daemon (10.0.0.1:35674). Feb 13 19:57:06.979468 systemd-logind[1418]: Removed session 18. Feb 13 19:57:07.008383 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 35674 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:07.009754 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:07.013388 systemd-logind[1418]: New session 19 of user core. Feb 13 19:57:07.023333 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:57:07.132381 sshd[4040]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:07.135681 systemd[1]: sshd@18-10.0.0.127:22-10.0.0.1:35674.service: Deactivated successfully. Feb 13 19:57:07.137442 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:57:07.138714 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:57:07.139660 systemd-logind[1418]: Removed session 19. Feb 13 19:57:12.144137 systemd[1]: Started sshd@19-10.0.0.127:22-10.0.0.1:35684.service - OpenSSH per-connection server daemon (10.0.0.1:35684). Feb 13 19:57:12.180980 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 35684 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:12.182321 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:12.188487 systemd-logind[1418]: New session 20 of user core. Feb 13 19:57:12.202377 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:57:12.312449 sshd[4058]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:12.315845 systemd[1]: sshd@19-10.0.0.127:22-10.0.0.1:35684.service: Deactivated successfully. Feb 13 19:57:12.317648 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:57:12.318412 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:57:12.319308 systemd-logind[1418]: Removed session 20. Feb 13 19:57:17.322661 systemd[1]: Started sshd@20-10.0.0.127:22-10.0.0.1:43364.service - OpenSSH per-connection server daemon (10.0.0.1:43364). 
Feb 13 19:57:17.358059 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 43364 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:17.359341 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:17.362753 systemd-logind[1418]: New session 21 of user core. Feb 13 19:57:17.373362 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:57:17.479294 sshd[4074]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:17.482459 systemd[1]: sshd@20-10.0.0.127:22-10.0.0.1:43364.service: Deactivated successfully. Feb 13 19:57:17.484518 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:57:17.485331 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:57:17.486207 systemd-logind[1418]: Removed session 21. Feb 13 19:57:22.489744 systemd[1]: Started sshd@21-10.0.0.127:22-10.0.0.1:41352.service - OpenSSH per-connection server daemon (10.0.0.1:41352). Feb 13 19:57:22.525065 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 41352 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:22.526356 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:22.530243 systemd-logind[1418]: New session 22 of user core. Feb 13 19:57:22.541328 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:57:22.645256 sshd[4088]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:22.648400 systemd[1]: sshd@21-10.0.0.127:22-10.0.0.1:41352.service: Deactivated successfully. Feb 13 19:57:22.650138 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:57:22.650774 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:57:22.651557 systemd-logind[1418]: Removed session 22. Feb 13 19:57:24.450115 kubelet[2467]: E0213 19:57:24.449897 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:24.450115 kubelet[2467]: E0213 19:57:24.450047 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:27.655650 systemd[1]: Started sshd@22-10.0.0.127:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360). Feb 13 19:57:27.690779 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:27.691965 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:27.695365 systemd-logind[1418]: New session 23 of user core. Feb 13 19:57:27.705337 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:57:27.807557 sshd[4103]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:27.821688 systemd[1]: sshd@22-10.0.0.127:22-10.0.0.1:41360.service: Deactivated successfully. Feb 13 19:57:27.823294 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:57:27.824629 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:57:27.830433 systemd[1]: Started sshd@23-10.0.0.127:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362). Feb 13 19:57:27.831464 systemd-logind[1418]: Removed session 23. 
Feb 13 19:57:27.862522 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:27.864039 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:27.867508 systemd-logind[1418]: New session 24 of user core. Feb 13 19:57:27.878338 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:57:28.447859 kubelet[2467]: E0213 19:57:28.447810 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:30.745375 containerd[1435]: time="2025-02-13T19:57:30.745179687Z" level=info msg="StopContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" with timeout 30 (s)" Feb 13 19:57:30.750143 containerd[1435]: time="2025-02-13T19:57:30.750111778Z" level=info msg="Stop container \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" with signal terminated" Feb 13 19:57:30.760251 systemd[1]: cri-containerd-dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6.scope: Deactivated successfully. Feb 13 19:57:30.784493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6-rootfs.mount: Deactivated successfully. Feb 13 19:57:30.786154 containerd[1435]: time="2025-02-13T19:57:30.786107833Z" level=info msg="StopContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" with timeout 2 (s)" Feb 13 19:57:30.786616 containerd[1435]: time="2025-02-13T19:57:30.786582866Z" level=info msg="Stop container \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" with signal terminated" Feb 13 19:57:30.788658 containerd[1435]: time="2025-02-13T19:57:30.788439600Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:57:30.791573 containerd[1435]: time="2025-02-13T19:57:30.791507797Z" level=info msg="shim disconnected" id=dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6 namespace=k8s.io Feb 13 19:57:30.791573 containerd[1435]: time="2025-02-13T19:57:30.791564556Z" level=warning msg="cleaning up after shim disconnected" id=dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6 namespace=k8s.io Feb 13 19:57:30.791573 containerd[1435]: time="2025-02-13T19:57:30.791573476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:30.793793 systemd-networkd[1374]: lxc_health: Link DOWN Feb 13 19:57:30.793798 systemd-networkd[1374]: lxc_health: Lost carrier Feb 13 19:57:30.811443 systemd[1]: cri-containerd-15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa.scope: Deactivated successfully. Feb 13 19:57:30.811762 systemd[1]: cri-containerd-15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa.scope: Consumed 6.391s CPU time. Feb 13 19:57:30.831551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa-rootfs.mount: Deactivated successfully. 
Feb 13 19:57:30.838023 containerd[1435]: time="2025-02-13T19:57:30.837961665Z" level=info msg="shim disconnected" id=15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa namespace=k8s.io Feb 13 19:57:30.838023 containerd[1435]: time="2025-02-13T19:57:30.838011905Z" level=warning msg="cleaning up after shim disconnected" id=15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa namespace=k8s.io Feb 13 19:57:30.838023 containerd[1435]: time="2025-02-13T19:57:30.838020985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:30.838728 containerd[1435]: time="2025-02-13T19:57:30.838694015Z" level=info msg="StopContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" returns successfully" Feb 13 19:57:30.839668 containerd[1435]: time="2025-02-13T19:57:30.839540803Z" level=info msg="StopPodSandbox for \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\"" Feb 13 19:57:30.839668 containerd[1435]: time="2025-02-13T19:57:30.839573323Z" level=info msg="Container to stop \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.842623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78-shm.mount: Deactivated successfully. Feb 13 19:57:30.850503 containerd[1435]: time="2025-02-13T19:57:30.850458770Z" level=info msg="StopContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" returns successfully" Feb 13 19:57:30.851211 containerd[1435]: time="2025-02-13T19:57:30.851140361Z" level=info msg="StopPodSandbox for \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\"" Feb 13 19:57:30.851211 containerd[1435]: time="2025-02-13T19:57:30.851182200Z" level=info msg="Container to stop \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.851311 containerd[1435]: time="2025-02-13T19:57:30.851235879Z" level=info msg="Container to stop \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.851311 containerd[1435]: time="2025-02-13T19:57:30.851247359Z" level=info msg="Container to stop \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.851311 containerd[1435]: time="2025-02-13T19:57:30.851256919Z" level=info msg="Container to stop \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.851311 containerd[1435]: time="2025-02-13T19:57:30.851267159Z" level=info msg="Container to stop \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:57:30.852761 systemd[1]: cri-containerd-3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78.scope: Deactivated successfully. Feb 13 19:57:30.864618 systemd[1]: cri-containerd-1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9.scope: Deactivated successfully. 
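Note: before "StopPodSandbox", containerd records for every container in the pod that it is already CONTAINER_EXITED (the "must be in running or unknown state" lines), so only the sandbox and its network namespace remain to tear down. Roughly the CRI call kubelet issues, assuming the k8s.io/cri-api v1 bindings and containerd's CRI endpoint:

```go
// Sketch of the CRI StopPodSandbox round-trip visible above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Stopping the sandbox tears down the pause container and pod netns;
	// the subsequent "TearDown network ... successfully" lines confirm it.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "1fe914cd002f...", // truncated sandbox ID placeholder
	})
	if err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}
}
```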
Feb 13 19:57:30.886316 containerd[1435]: time="2025-02-13T19:57:30.886252268Z" level=info msg="shim disconnected" id=1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9 namespace=k8s.io Feb 13 19:57:30.886316 containerd[1435]: time="2025-02-13T19:57:30.886311107Z" level=warning msg="cleaning up after shim disconnected" id=1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9 namespace=k8s.io Feb 13 19:57:30.886316 containerd[1435]: time="2025-02-13T19:57:30.886324467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:30.888545 containerd[1435]: time="2025-02-13T19:57:30.886798940Z" level=info msg="shim disconnected" id=3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78 namespace=k8s.io Feb 13 19:57:30.888545 containerd[1435]: time="2025-02-13T19:57:30.886837540Z" level=warning msg="cleaning up after shim disconnected" id=3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78 namespace=k8s.io Feb 13 19:57:30.888545 containerd[1435]: time="2025-02-13T19:57:30.886847140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:30.897810 containerd[1435]: time="2025-02-13T19:57:30.897767466Z" level=info msg="TearDown network for sandbox \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" successfully" Feb 13 19:57:30.897810 containerd[1435]: time="2025-02-13T19:57:30.897801266Z" level=info msg="StopPodSandbox for \"1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9\" returns successfully" Feb 13 19:57:30.921535 containerd[1435]: time="2025-02-13T19:57:30.921381375Z" level=info msg="TearDown network for sandbox \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\" successfully" Feb 13 19:57:30.922019 containerd[1435]: time="2025-02-13T19:57:30.921964647Z" level=info msg="StopPodSandbox for \"3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78\" returns successfully" Feb 13 19:57:30.943967 kubelet[2467]: I0213 19:57:30.943921 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tst64\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-kube-api-access-tst64\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.943967 kubelet[2467]: I0213 19:57:30.943965 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hostproc\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.943985 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-run\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.944004 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-kernel\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.944027 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-xtables-lock\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.944045 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-config-path\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.944059 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cni-path\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945265 kubelet[2467]: I0213 19:57:30.944074 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-bpf-maps\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944092 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-etc-cni-netd\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944109 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4ae7822-ba56-4662-8bca-13f47dbe7eed-clustermesh-secrets\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944123 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-cgroup\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944146 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hubble-tls\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944161 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-lib-modules\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.945402 kubelet[2467]: I0213 19:57:30.944243 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-net\") pod \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\" (UID: \"d4ae7822-ba56-4662-8bca-13f47dbe7eed\") " Feb 13 19:57:30.946395 kubelet[2467]: I0213 19:57:30.946341 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-run" (OuterVolumeSpecName: 
"cilium-run") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.948944 kubelet[2467]: I0213 19:57:30.948906 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:57:30.950115 kubelet[2467]: I0213 19:57:30.949609 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hostproc" (OuterVolumeSpecName: "hostproc") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950246 kubelet[2467]: I0213 19:57:30.949630 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950246 kubelet[2467]: I0213 19:57:30.949641 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cni-path" (OuterVolumeSpecName: "cni-path") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950246 kubelet[2467]: I0213 19:57:30.949970 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950246 kubelet[2467]: I0213 19:57:30.949989 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950246 kubelet[2467]: I0213 19:57:30.950005 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950567 kubelet[2467]: I0213 19:57:30.950017 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.950567 kubelet[2467]: I0213 19:57:30.950027 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.951509 kubelet[2467]: I0213 19:57:30.951409 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-kube-api-access-tst64" (OuterVolumeSpecName: "kube-api-access-tst64") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "kube-api-access-tst64". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:57:30.951824 kubelet[2467]: I0213 19:57:30.951793 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:57:30.953792 kubelet[2467]: I0213 19:57:30.953755 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:57:30.956132 kubelet[2467]: I0213 19:57:30.956091 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4ae7822-ba56-4662-8bca-13f47dbe7eed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d4ae7822-ba56-4662-8bca-13f47dbe7eed" (UID: "d4ae7822-ba56-4662-8bca-13f47dbe7eed"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045274 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9943eba-7ca1-4f12-a782-2729c46c8bb0-cilium-config-path\") pod \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\" (UID: \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\") " Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045388 2467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xxxx\" (UniqueName: \"kubernetes.io/projected/d9943eba-7ca1-4f12-a782-2729c46c8bb0-kube-api-access-7xxxx\") pod \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\" (UID: \"d9943eba-7ca1-4f12-a782-2729c46c8bb0\") " Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045428 2467 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045439 2467 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045448 2467 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045456 2467 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4ae7822-ba56-4662-8bca-13f47dbe7eed-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.045907 kubelet[2467]: I0213 19:57:31.045472 2467 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045482 2467 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045491 2467 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045498 2467 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045506 2467 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045514 2467 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045521 2467 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045552 2467 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tst64\" (UniqueName: \"kubernetes.io/projected/d4ae7822-ba56-4662-8bca-13f47dbe7eed-kube-api-access-tst64\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046149 kubelet[2467]: I0213 19:57:31.045562 2467 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.046333 kubelet[2467]: I0213 19:57:31.045570 2467 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4ae7822-ba56-4662-8bca-13f47dbe7eed-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.048086 kubelet[2467]: I0213 19:57:31.048031 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9943eba-7ca1-4f12-a782-2729c46c8bb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9943eba-7ca1-4f12-a782-2729c46c8bb0" (UID: "d9943eba-7ca1-4f12-a782-2729c46c8bb0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:57:31.048725 kubelet[2467]: I0213 19:57:31.048684 2467 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9943eba-7ca1-4f12-a782-2729c46c8bb0-kube-api-access-7xxxx" (OuterVolumeSpecName: "kube-api-access-7xxxx") pod "d9943eba-7ca1-4f12-a782-2729c46c8bb0" (UID: "d9943eba-7ca1-4f12-a782-2729c46c8bb0"). InnerVolumeSpecName "kube-api-access-7xxxx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:57:31.146255 kubelet[2467]: I0213 19:57:31.146208 2467 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9943eba-7ca1-4f12-a782-2729c46c8bb0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.146255 kubelet[2467]: I0213 19:57:31.146243 2467 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xxxx\" (UniqueName: \"kubernetes.io/projected/d9943eba-7ca1-4f12-a782-2729c46c8bb0-kube-api-access-7xxxx\") on node \"localhost\" DevicePath \"\"" Feb 13 19:57:31.658044 kubelet[2467]: I0213 19:57:31.657809 2467 scope.go:117] "RemoveContainer" containerID="dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6" Feb 13 19:57:31.660126 containerd[1435]: time="2025-02-13T19:57:31.659072680Z" level=info msg="RemoveContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\"" Feb 13 19:57:31.660511 systemd[1]: Removed slice kubepods-besteffort-podd9943eba_7ca1_4f12_a782_2729c46c8bb0.slice - libcontainer container kubepods-besteffort-podd9943eba_7ca1_4f12_a782_2729c46c8bb0.slice. 
Feb 13 19:57:31.663102 containerd[1435]: time="2025-02-13T19:57:31.662563556Z" level=info msg="RemoveContainer for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" returns successfully" Feb 13 19:57:31.663158 kubelet[2467]: I0213 19:57:31.662797 2467 scope.go:117] "RemoveContainer" containerID="dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6" Feb 13 19:57:31.664279 containerd[1435]: time="2025-02-13T19:57:31.663746581Z" level=error msg="ContainerStatus for \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\": not found" Feb 13 19:57:31.665813 systemd[1]: Removed slice kubepods-burstable-podd4ae7822_ba56_4662_8bca_13f47dbe7eed.slice - libcontainer container kubepods-burstable-podd4ae7822_ba56_4662_8bca_13f47dbe7eed.slice. Feb 13 19:57:31.665911 systemd[1]: kubepods-burstable-podd4ae7822_ba56_4662_8bca_13f47dbe7eed.slice: Consumed 6.528s CPU time. Feb 13 19:57:31.678133 kubelet[2467]: E0213 19:57:31.678070 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\": not found" containerID="dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6" Feb 13 19:57:31.678465 kubelet[2467]: I0213 19:57:31.678261 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6"} err="failed to get container status \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbd421fc6e1937ef3803c4dcf2144a0e8c0cf5c1c353c9f657905070c31aa7c6\": not found" Feb 13 19:57:31.678465 kubelet[2467]: I0213 19:57:31.678369 2467 scope.go:117] "RemoveContainer" containerID="15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa" Feb 13 19:57:31.679637 containerd[1435]: time="2025-02-13T19:57:31.679592464Z" level=info msg="RemoveContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\"" Feb 13 19:57:31.688753 containerd[1435]: time="2025-02-13T19:57:31.688697390Z" level=info msg="RemoveContainer for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" returns successfully" Feb 13 19:57:31.689011 kubelet[2467]: I0213 19:57:31.688977 2467 scope.go:117] "RemoveContainer" containerID="6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a" Feb 13 19:57:31.689998 containerd[1435]: time="2025-02-13T19:57:31.689968014Z" level=info msg="RemoveContainer for \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\"" Feb 13 19:57:31.692267 containerd[1435]: time="2025-02-13T19:57:31.692230226Z" level=info msg="RemoveContainer for \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\" returns successfully" Feb 13 19:57:31.692432 kubelet[2467]: I0213 19:57:31.692402 2467 scope.go:117] "RemoveContainer" containerID="5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00" Feb 13 19:57:31.693994 containerd[1435]: time="2025-02-13T19:57:31.693514610Z" level=info msg="RemoveContainer for \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\"" Feb 13 19:57:31.695787 containerd[1435]: time="2025-02-13T19:57:31.695746502Z" level=info msg="RemoveContainer for 
\"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\" returns successfully" Feb 13 19:57:31.695936 kubelet[2467]: I0213 19:57:31.695905 2467 scope.go:117] "RemoveContainer" containerID="a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459" Feb 13 19:57:31.696906 containerd[1435]: time="2025-02-13T19:57:31.696874168Z" level=info msg="RemoveContainer for \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\"" Feb 13 19:57:31.698979 containerd[1435]: time="2025-02-13T19:57:31.698949062Z" level=info msg="RemoveContainer for \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\" returns successfully" Feb 13 19:57:31.699157 kubelet[2467]: I0213 19:57:31.699133 2467 scope.go:117] "RemoveContainer" containerID="bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb" Feb 13 19:57:31.700259 containerd[1435]: time="2025-02-13T19:57:31.700231926Z" level=info msg="RemoveContainer for \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\"" Feb 13 19:57:31.702411 containerd[1435]: time="2025-02-13T19:57:31.702370099Z" level=info msg="RemoveContainer for \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\" returns successfully" Feb 13 19:57:31.702579 kubelet[2467]: I0213 19:57:31.702545 2467 scope.go:117] "RemoveContainer" containerID="15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa" Feb 13 19:57:31.702838 containerd[1435]: time="2025-02-13T19:57:31.702803454Z" level=error msg="ContainerStatus for \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\": not found" Feb 13 19:57:31.703017 kubelet[2467]: E0213 19:57:31.702991 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\": not found" containerID="15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa" Feb 13 19:57:31.703062 kubelet[2467]: I0213 19:57:31.703026 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa"} err="failed to get container status \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"15c08ec5592213d3c745dfd5151659e9df85b4e880aaa0d0e3b69a198139d8aa\": not found" Feb 13 19:57:31.703062 kubelet[2467]: I0213 19:57:31.703050 2467 scope.go:117] "RemoveContainer" containerID="6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a" Feb 13 19:57:31.703283 containerd[1435]: time="2025-02-13T19:57:31.703244768Z" level=error msg="ContainerStatus for \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\": not found" Feb 13 19:57:31.703467 kubelet[2467]: E0213 19:57:31.703407 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\": not found" containerID="6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a" Feb 13 19:57:31.703467 
kubelet[2467]: I0213 19:57:31.703446 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a"} err="failed to get container status \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d5be4ab65c17e9727c799956225f1c7fa679871fe1ba8a87496a9567aa9639a\": not found" Feb 13 19:57:31.703623 kubelet[2467]: I0213 19:57:31.703473 2467 scope.go:117] "RemoveContainer" containerID="5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00" Feb 13 19:57:31.703653 containerd[1435]: time="2025-02-13T19:57:31.703632844Z" level=error msg="ContainerStatus for \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\": not found" Feb 13 19:57:31.703909 kubelet[2467]: E0213 19:57:31.703767 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\": not found" containerID="5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00" Feb 13 19:57:31.703909 kubelet[2467]: I0213 19:57:31.703797 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00"} err="failed to get container status \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\": rpc error: code = NotFound desc = an error occurred when try to find container \"5871daf586d4bfbc95e6347090bfc01b39da65fffd9fb24da5a1d701c1f64f00\": not found" Feb 13 19:57:31.703909 kubelet[2467]: I0213 19:57:31.703814 2467 scope.go:117] "RemoveContainer" containerID="a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459" Feb 13 19:57:31.704145 containerd[1435]: time="2025-02-13T19:57:31.704093958Z" level=error msg="ContainerStatus for \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\": not found" Feb 13 19:57:31.704268 kubelet[2467]: E0213 19:57:31.704247 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\": not found" containerID="a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459" Feb 13 19:57:31.704313 kubelet[2467]: I0213 19:57:31.704276 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459"} err="failed to get container status \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\": rpc error: code = NotFound desc = an error occurred when try to find container \"a52ff7bb1b179e7dacea22f0e96b3e480c4216fde27f0f57c6cb30d718ce6459\": not found" Feb 13 19:57:31.704313 kubelet[2467]: I0213 19:57:31.704292 2467 scope.go:117] "RemoveContainer" containerID="bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb" Feb 13 19:57:31.704557 containerd[1435]: time="2025-02-13T19:57:31.704513953Z" level=error 
msg="ContainerStatus for \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\": not found" Feb 13 19:57:31.704828 kubelet[2467]: E0213 19:57:31.704791 2467 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\": not found" containerID="bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb" Feb 13 19:57:31.704828 kubelet[2467]: I0213 19:57:31.704813 2467 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb"} err="failed to get container status \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc222af8cd4ec43f2dda01500133f362c10212041b7239a6a7b5aaf090a086eb\": not found" Feb 13 19:57:31.764703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aa3f073dfea3886566334ab8789ef7b2896a04000d61bf45e826e896feb4a78-rootfs.mount: Deactivated successfully. Feb 13 19:57:31.764803 systemd[1]: var-lib-kubelet-pods-d9943eba\x2d7ca1\x2d4f12\x2da782\x2d2729c46c8bb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xxxx.mount: Deactivated successfully. Feb 13 19:57:31.764865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9-rootfs.mount: Deactivated successfully. Feb 13 19:57:31.764922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fe914cd002f4f8eca715365be27e284447b597071a487d056c2d7387c7ef3f9-shm.mount: Deactivated successfully. Feb 13 19:57:31.764976 systemd[1]: var-lib-kubelet-pods-d4ae7822\x2dba56\x2d4662\x2d8bca\x2d13f47dbe7eed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtst64.mount: Deactivated successfully. Feb 13 19:57:31.765027 systemd[1]: var-lib-kubelet-pods-d4ae7822\x2dba56\x2d4662\x2d8bca\x2d13f47dbe7eed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:57:31.765081 systemd[1]: var-lib-kubelet-pods-d4ae7822\x2dba56\x2d4662\x2d8bca\x2d13f47dbe7eed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:57:32.449295 kubelet[2467]: I0213 19:57:32.449259 2467 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ae7822-ba56-4662-8bca-13f47dbe7eed" path="/var/lib/kubelet/pods/d4ae7822-ba56-4662-8bca-13f47dbe7eed/volumes" Feb 13 19:57:32.449833 kubelet[2467]: I0213 19:57:32.449796 2467 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9943eba-7ca1-4f12-a782-2729c46c8bb0" path="/var/lib/kubelet/pods/d9943eba-7ca1-4f12-a782-2729c46c8bb0/volumes" Feb 13 19:57:32.714775 sshd[4117]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:32.723722 systemd[1]: sshd@23-10.0.0.127:22-10.0.0.1:41362.service: Deactivated successfully. Feb 13 19:57:32.726310 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:57:32.726589 systemd[1]: session-24.scope: Consumed 2.212s CPU time. Feb 13 19:57:32.727733 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit. 
Feb 13 19:57:32.734444 systemd[1]: Started sshd@24-10.0.0.127:22-10.0.0.1:59538.service - OpenSSH per-connection server daemon (10.0.0.1:59538). Feb 13 19:57:32.735337 systemd-logind[1418]: Removed session 24. Feb 13 19:57:32.771973 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 59538 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:32.773422 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:32.777697 systemd-logind[1418]: New session 25 of user core. Feb 13 19:57:32.783349 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:57:33.484943 sshd[4281]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:33.498439 systemd[1]: sshd@24-10.0.0.127:22-10.0.0.1:59538.service: Deactivated successfully. Feb 13 19:57:33.501767 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:57:33.507667 kubelet[2467]: I0213 19:57:33.506819 2467 memory_manager.go:355] "RemoveStaleState removing state" podUID="d4ae7822-ba56-4662-8bca-13f47dbe7eed" containerName="cilium-agent" Feb 13 19:57:33.507667 kubelet[2467]: I0213 19:57:33.506852 2467 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9943eba-7ca1-4f12-a782-2729c46c8bb0" containerName="cilium-operator" Feb 13 19:57:33.508400 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:57:33.516580 systemd[1]: Started sshd@25-10.0.0.127:22-10.0.0.1:59542.service - OpenSSH per-connection server daemon (10.0.0.1:59542). Feb 13 19:57:33.521590 systemd-logind[1418]: Removed session 25. Feb 13 19:57:33.528608 systemd[1]: Created slice kubepods-burstable-podc6ebb261_ca8c_47f5_bd76_17c30967019a.slice - libcontainer container kubepods-burstable-podc6ebb261_ca8c_47f5_bd76_17c30967019a.slice. 
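Note: the pod slice names above encode QoS class and pod UID. systemd reserves "-" as the slice hierarchy separator, so the dashes in the UID are replaced with underscores before it is embedded in the unit name. A sketch of the visible convention; the helper is illustrative, not kubelet's actual function:

```go
// Derive the systemd slice name kubelet uses for a pod cgroup.
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the convention in the log:
// kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "Removed slice kubepods-besteffort-podd9943eba_...slice" above.
	fmt.Println(podSliceName("besteffort", "d9943eba-7ca1-4f12-a782-2729c46c8bb0"))
	// Matches "Created slice kubepods-burstable-podc6ebb261_...slice" above.
	fmt.Println(podSliceName("burstable", "c6ebb261-ca8c-47f5-bd76-17c30967019a"))
}
```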
Feb 13 19:57:33.552722 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 59542 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:33.554083 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.558938 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-xtables-lock\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.558978 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-host-proc-sys-net\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.559010 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-cilium-run\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.559029 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-bpf-maps\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.559044 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-host-proc-sys-kernel\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561212 kubelet[2467]: I0213 19:57:33.559063 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6ebb261-ca8c-47f5-bd76-17c30967019a-clustermesh-secrets\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559079 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96k8\" (UniqueName: \"kubernetes.io/projected/c6ebb261-ca8c-47f5-bd76-17c30967019a-kube-api-access-b96k8\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559096 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-cni-path\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559112 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-etc-cni-netd\") pod \"cilium-6957f\" 
(UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559127 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6ebb261-ca8c-47f5-bd76-17c30967019a-hubble-tls\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559142 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6ebb261-ca8c-47f5-bd76-17c30967019a-cilium-ipsec-secrets\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561433 kubelet[2467]: I0213 19:57:33.559158 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-hostproc\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561562 kubelet[2467]: I0213 19:57:33.559177 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-cilium-cgroup\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561562 kubelet[2467]: I0213 19:57:33.559211 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6ebb261-ca8c-47f5-bd76-17c30967019a-lib-modules\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.561562 kubelet[2467]: I0213 19:57:33.559229 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6ebb261-ca8c-47f5-bd76-17c30967019a-cilium-config-path\") pod \"cilium-6957f\" (UID: \"c6ebb261-ca8c-47f5-bd76-17c30967019a\") " pod="kube-system/cilium-6957f" Feb 13 19:57:33.562780 systemd-logind[1418]: New session 26 of user core. Feb 13 19:57:33.571387 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:57:33.625256 sshd[4294]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:33.640934 systemd[1]: sshd@25-10.0.0.127:22-10.0.0.1:59542.service: Deactivated successfully. Feb 13 19:57:33.643132 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:57:33.644682 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:57:33.645986 systemd[1]: Started sshd@26-10.0.0.127:22-10.0.0.1:59552.service - OpenSSH per-connection server daemon (10.0.0.1:59552). Feb 13 19:57:33.646710 systemd-logind[1418]: Removed session 26. Feb 13 19:57:33.689195 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 59552 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:57:33.690542 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:33.694149 systemd-logind[1418]: New session 27 of user core. Feb 13 19:57:33.704417 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 19:57:33.832448 kubelet[2467]: E0213 19:57:33.831941 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:33.833373 containerd[1435]: time="2025-02-13T19:57:33.833253261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6957f,Uid:c6ebb261-ca8c-47f5-bd76-17c30967019a,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:33.853599 containerd[1435]: time="2025-02-13T19:57:33.852710235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:33.853599 containerd[1435]: time="2025-02-13T19:57:33.852776955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:33.853599 containerd[1435]: time="2025-02-13T19:57:33.852787674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:33.853599 containerd[1435]: time="2025-02-13T19:57:33.852879714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:33.877427 systemd[1]: Started cri-containerd-90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd.scope - libcontainer container 90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd. Feb 13 19:57:33.897357 containerd[1435]: time="2025-02-13T19:57:33.897311290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6957f,Uid:c6ebb261-ca8c-47f5-bd76-17c30967019a,Namespace:kube-system,Attempt:0,} returns sandbox id \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\"" Feb 13 19:57:33.897986 kubelet[2467]: E0213 19:57:33.897961 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:33.901029 containerd[1435]: time="2025-02-13T19:57:33.900995735Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:57:33.911495 containerd[1435]: time="2025-02-13T19:57:33.911442515Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b\"" Feb 13 19:57:33.912088 containerd[1435]: time="2025-02-13T19:57:33.912049950Z" level=info msg="StartContainer for \"3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b\"" Feb 13 19:57:33.941366 systemd[1]: Started cri-containerd-3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b.scope - libcontainer container 3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b. Feb 13 19:57:33.962582 containerd[1435]: time="2025-02-13T19:57:33.962539389Z" level=info msg="StartContainer for \"3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b\" returns successfully" Feb 13 19:57:33.975931 systemd[1]: cri-containerd-3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b.scope: Deactivated successfully. 
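Note: RunPodSandbox returns the new sandbox ID, then each init container goes through CreateContainer → StartContainer, with the cri-containerd scope deactivating as soon as the short-lived process exits. The same round-trip against the containerd Go client directly (kubelet goes through CRI, but the shape matches); the image reference and IDs are placeholders:

```go
// Create and start a container inside containerd, then wait for it to exit,
// as the mount-cgroup init container above does almost immediately.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "quay.io/cilium/cilium:v1.x") // placeholder tag
	if err != nil {
		log.Fatal(err) // image must already be present, as it was here (no pull in the log)
	}
	container, err := client.NewContainer(ctx, "mount-cgroup-demo",
		containerd.WithNewSnapshot("mount-cgroup-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
		log.Fatal(err)
	}
	st := <-exitCh // init containers exit quickly; the scope then deactivates
	code, _, _ := st.Result()
	log.Printf("exited: %d", code)
}
```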
Feb 13 19:57:34.012879 containerd[1435]: time="2025-02-13T19:57:34.012798407Z" level=info msg="shim disconnected" id=3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b namespace=k8s.io Feb 13 19:57:34.012879 containerd[1435]: time="2025-02-13T19:57:34.012863447Z" level=warning msg="cleaning up after shim disconnected" id=3877f4a14cea409dfa312235316ad99cca385a895347a06e0676b5b5ff6fb92b namespace=k8s.io Feb 13 19:57:34.012879 containerd[1435]: time="2025-02-13T19:57:34.012871887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:34.668632 kubelet[2467]: E0213 19:57:34.668598 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:34.670591 containerd[1435]: time="2025-02-13T19:57:34.670102549Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:57:34.681895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399254869.mount: Deactivated successfully. Feb 13 19:57:34.682369 containerd[1435]: time="2025-02-13T19:57:34.682050612Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b\"" Feb 13 19:57:34.684058 containerd[1435]: time="2025-02-13T19:57:34.682724407Z" level=info msg="StartContainer for \"1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b\"" Feb 13 19:57:34.711339 systemd[1]: Started cri-containerd-1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b.scope - libcontainer container 1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b. Feb 13 19:57:34.732304 containerd[1435]: time="2025-02-13T19:57:34.732268724Z" level=info msg="StartContainer for \"1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b\" returns successfully" Feb 13 19:57:34.746779 systemd[1]: cri-containerd-1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b.scope: Deactivated successfully. Feb 13 19:57:34.765212 containerd[1435]: time="2025-02-13T19:57:34.765142017Z" level=info msg="shim disconnected" id=1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b namespace=k8s.io Feb 13 19:57:34.765212 containerd[1435]: time="2025-02-13T19:57:34.765199577Z" level=warning msg="cleaning up after shim disconnected" id=1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b namespace=k8s.io Feb 13 19:57:34.765212 containerd[1435]: time="2025-02-13T19:57:34.765210577Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:35.519158 kubelet[2467]: E0213 19:57:35.519090 2467 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:57:35.666159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a8f0de144a0d625c748d23fec21be969da03277eb51bbed450fb81ef061975b-rootfs.mount: Deactivated successfully. 
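Note: "apply-sysctl-overwrites" is Cilium's init step for rewriting kernel parameters whose distribution or kubelet defaults conflict with the datapath. The mechanism is just writes under /proc/sys; the key below is illustrative — the log does not show which parameters were applied:

```go
// Minimal sysctl writer, sketching what an apply-sysctl-overwrites step does.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value to /proc/sys/<key with '.' replaced by '/'>.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative parameter only; requires root to succeed.
	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
	log.Println("sysctl applied")
}
```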
Feb 13 19:57:35.672310 kubelet[2467]: E0213 19:57:35.672280 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:35.675353 containerd[1435]: time="2025-02-13T19:57:35.675319905Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:57:35.689678 containerd[1435]: time="2025-02-13T19:57:35.689559849Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e\"" Feb 13 19:57:35.691049 containerd[1435]: time="2025-02-13T19:57:35.689944726Z" level=info msg="StartContainer for \"c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e\"" Feb 13 19:57:35.715342 systemd[1]: Started cri-containerd-c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e.scope - libcontainer container c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e. Feb 13 19:57:35.737587 containerd[1435]: time="2025-02-13T19:57:35.737435285Z" level=info msg="StartContainer for \"c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e\" returns successfully" Feb 13 19:57:35.739050 systemd[1]: cri-containerd-c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e.scope: Deactivated successfully. Feb 13 19:57:35.759065 containerd[1435]: time="2025-02-13T19:57:35.758916500Z" level=info msg="shim disconnected" id=c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e namespace=k8s.io Feb 13 19:57:35.759065 containerd[1435]: time="2025-02-13T19:57:35.758970900Z" level=warning msg="cleaning up after shim disconnected" id=c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e namespace=k8s.io Feb 13 19:57:35.759065 containerd[1435]: time="2025-02-13T19:57:35.758978820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:36.666141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c65d7d1cf64ee19b4e69c0d17bac20cb178add7c5b6da7e1528a578325a0fb6e-rootfs.mount: Deactivated successfully. 
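Note: "mount-bpf-fs" mounts the BPF filesystem at /sys/fs/bpf so pinned maps survive agent restarts. The equivalent mount(2) call via golang.org/x/sys/unix, with the already-mounted check elided:

```go
// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mount bpffs: %v", err) // EBUSY typically means it is already mounted
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```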
Feb 13 19:57:36.675574 kubelet[2467]: E0213 19:57:36.675481 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:36.678940 containerd[1435]: time="2025-02-13T19:57:36.678749737Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:57:36.689641 containerd[1435]: time="2025-02-13T19:57:36.689601918Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808\"" Feb 13 19:57:36.691364 containerd[1435]: time="2025-02-13T19:57:36.690316074Z" level=info msg="StartContainer for \"0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808\"" Feb 13 19:57:36.720445 systemd[1]: Started cri-containerd-0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808.scope - libcontainer container 0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808. Feb 13 19:57:36.739901 systemd[1]: cri-containerd-0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808.scope: Deactivated successfully. Feb 13 19:57:36.741508 containerd[1435]: time="2025-02-13T19:57:36.741462356Z" level=info msg="StartContainer for \"0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808\" returns successfully" Feb 13 19:57:36.755587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808-rootfs.mount: Deactivated successfully. Feb 13 19:57:36.760601 containerd[1435]: time="2025-02-13T19:57:36.760408493Z" level=info msg="shim disconnected" id=0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808 namespace=k8s.io Feb 13 19:57:36.760601 containerd[1435]: time="2025-02-13T19:57:36.760461133Z" level=warning msg="cleaning up after shim disconnected" id=0b00c34ca5f45482e109e5900472ce41300f7b2608c559391362bd7579e2b808 namespace=k8s.io Feb 13 19:57:36.760601 containerd[1435]: time="2025-02-13T19:57:36.760469413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:37.679622 kubelet[2467]: E0213 19:57:37.678752 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:37.681844 containerd[1435]: time="2025-02-13T19:57:37.681807272Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:57:37.694595 containerd[1435]: time="2025-02-13T19:57:37.694548259Z" level=info msg="CreateContainer within sandbox \"90839805475c108921cdcac5ed3031c2d409ec5adb804b1e6b577b101ff51abd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63\"" Feb 13 19:57:37.695000 containerd[1435]: time="2025-02-13T19:57:37.694971817Z" level=info msg="StartContainer for \"58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63\"" Feb 13 19:57:37.714392 systemd[1]: run-containerd-runc-k8s.io-58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63-runc.wycenh.mount: Deactivated successfully. 
Feb 13 19:57:37.726468 systemd[1]: Started cri-containerd-58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63.scope - libcontainer container 58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63.
Feb 13 19:57:37.747979 containerd[1435]: time="2025-02-13T19:57:37.747858717Z" level=info msg="StartContainer for \"58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63\" returns successfully"
Feb 13 19:57:38.027236 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:57:38.685061 kubelet[2467]: E0213 19:57:38.684863 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:38.708892 kubelet[2467]: I0213 19:57:38.708541 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6957f" podStartSLOduration=5.708527443 podStartE2EDuration="5.708527443s" podCreationTimestamp="2025-02-13 19:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:57:38.708416243 +0000 UTC m=+88.337282167" watchObservedRunningTime="2025-02-13 19:57:38.708527443 +0000 UTC m=+88.337393367"
Feb 13 19:57:39.447026 kubelet[2467]: E0213 19:57:39.446984 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:39.833375 kubelet[2467]: E0213 19:57:39.833260 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:40.856290 systemd-networkd[1374]: lxc_health: Link UP
Feb 13 19:57:40.867480 systemd-networkd[1374]: lxc_health: Gained carrier
Feb 13 19:57:41.833599 kubelet[2467]: E0213 19:57:41.833555 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:42.126845 systemd[1]: run-containerd-runc-k8s.io-58d67ac7b8ae8f0418a7c51a94d9e426e1dfca2fffb6bdc2af5718659a663a63-runc.UBQOj7.mount: Deactivated successfully.
Feb 13 19:57:42.691973 kubelet[2467]: E0213 19:57:42.691905 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:42.854862 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Feb 13 19:57:43.447311 kubelet[2467]: E0213 19:57:43.447275 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:43.693976 kubelet[2467]: E0213 19:57:43.693934 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:46.401722 sshd[4302]: pam_unix(sshd:session): session closed for user core
Feb 13 19:57:46.404420 systemd[1]: sshd@26-10.0.0.127:22-10.0.0.1:59552.service: Deactivated successfully.
Feb 13 19:57:46.406421 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:57:46.407898 systemd-logind[1418]: Session 27 logged out. Waiting for processes to exit.
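[Note] The pod_startup_latency_tracker line above can be checked by hand: firstStartedPulling and lastFinishedPulling are the zero time (no image pull happened, the images were already on the node), so the reported podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, 19:57:38.708527443 - 19:57:33 = 5.708527443 s. A small Go snippet reproducing the arithmetic with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the kubelet log's timestamp format.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 19:57:33 +0000 UTC")
        running := mustParse("2025-02-13 19:57:38.708527443 +0000 UTC")

        // With no image pull to subtract, the startup SLO duration is
        // just observed-running minus pod-creation.
        fmt.Println(running.Sub(created)) // prints: 5.708527443s
    }

The ~5.7 s covers the whole init sequence visible above: mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent container, after which the lxc_health interface comes up for health checking.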
Feb 13 19:57:46.409081 systemd-logind[1418]: Removed session 27.