Feb 13 19:09:19.913247 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:09:19.913269 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 19:09:19.913280 kernel: KASLR enabled
Feb 13 19:09:19.913286 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:09:19.913292 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:09:19.913297 kernel: random: crng init done
Feb 13 19:09:19.913304 kernel: secureboot: Secure boot disabled
Feb 13 19:09:19.913310 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:09:19.913316 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:09:19.913324 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:09:19.913334 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913339 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913345 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913353 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913361 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913369 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913376 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913382 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913388 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:19.913394 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:09:19.913400 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:09:19.913406 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:19.913412 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:09:19.913418 kernel: Zone ranges:
Feb 13 19:09:19.913424 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:19.913431 kernel: DMA32 empty
Feb 13 19:09:19.913437 kernel: Normal empty
Feb 13 19:09:19.913443 kernel: Movable zone start for each node
Feb 13 19:09:19.913449 kernel: Early memory node ranges
Feb 13 19:09:19.913455 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:09:19.913461 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:09:19.913467 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:09:19.913473 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:09:19.913479 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:09:19.913485 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:09:19.913490 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:09:19.913496 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:09:19.913504 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:09:19.913510 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:19.913516 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:09:19.913525 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:09:19.913532 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:09:19.913538 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:09:19.913546 kernel: psci: Trusted OS migration not required
Feb 13 19:09:19.913553 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:09:19.913559 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:09:19.913566 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:09:19.913573 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:09:19.913579 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:09:19.913586 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:09:19.913592 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:09:19.913599 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:09:19.913605 kernel: CPU features: detected: Spectre-v4
Feb 13 19:09:19.913626 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:09:19.913662 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:09:19.913678 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:09:19.913685 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:09:19.913691 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:09:19.913698 kernel: alternatives: applying boot alternatives
Feb 13 19:09:19.913705 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 19:09:19.913712 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:09:19.913718 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:09:19.913745 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:09:19.913769 kernel: Fallback order for Node 0: 0
Feb 13 19:09:19.913777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:09:19.913783 kernel: Policy zone: DMA
Feb 13 19:09:19.913790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:09:19.913796 kernel: software IO TLB: area num 4.
Feb 13 19:09:19.913802 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:09:19.913810 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 19:09:19.913859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:09:19.913867 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:09:19.913874 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:09:19.913881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:09:19.913888 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:09:19.913894 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:09:19.913941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:09:19.913948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:09:19.913954 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:09:19.913960 kernel: GICv3: 256 SPIs implemented
Feb 13 19:09:19.913967 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:09:19.913973 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:09:19.914002 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:09:19.914024 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:09:19.914031 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:09:19.914037 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:09:19.914044 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:09:19.914051 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:09:19.914058 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:09:19.914064 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:09:19.914071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:19.914103 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:09:19.914121 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:09:19.914127 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:09:19.914134 kernel: arm-pv: using stolen time PV
Feb 13 19:09:19.914140 kernel: Console: colour dummy device 80x25
Feb 13 19:09:19.914147 kernel: ACPI: Core revision 20230628
Feb 13 19:09:19.914155 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:09:19.914210 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:09:19.914217 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:09:19.914224 kernel: landlock: Up and running.
Feb 13 19:09:19.914234 kernel: SELinux: Initializing.
Feb 13 19:09:19.914241 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:09:19.914248 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:09:19.914254 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:09:19.914261 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:09:19.914268 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:09:19.914276 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:09:19.914282 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:09:19.914289 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:09:19.914295 kernel: Remapping and enabling EFI services.
Feb 13 19:09:19.914302 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:09:19.914308 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:09:19.914315 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:09:19.914321 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:09:19.914328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:19.914336 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:09:19.914343 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:09:19.914355 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:09:19.914363 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:09:19.914370 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:19.914377 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:09:19.914384 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:09:19.914390 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:09:19.914397 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:09:19.914406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:19.914412 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:09:19.914419 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:09:19.914426 kernel: SMP: Total of 4 processors activated.
Feb 13 19:09:19.914433 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:09:19.914440 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:09:19.914447 kernel: CPU features: detected: Common not Private translations
Feb 13 19:09:19.914454 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:09:19.914462 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:09:19.914469 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:09:19.914476 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:09:19.914483 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:09:19.914489 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:09:19.914496 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:09:19.914503 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:09:19.914510 kernel: alternatives: applying system-wide alternatives
Feb 13 19:09:19.914517 kernel: devtmpfs: initialized
Feb 13 19:09:19.914525 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:09:19.914533 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:09:19.914539 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:09:19.914546 kernel: SMBIOS 3.0.0 present.
Feb 13 19:09:19.914553 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:09:19.914560 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:09:19.914567 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:09:19.914574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:09:19.914581 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:09:19.914590 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:09:19.914597 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:09:19.914603 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:09:19.914610 kernel: cpuidle: using governor menu
Feb 13 19:09:19.914617 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:09:19.914624 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:09:19.914631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:09:19.914638 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:09:19.914645 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:09:19.914655 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:09:19.914664 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 19:09:19.914671 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:09:19.914678 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:09:19.914685 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:09:19.914692 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:09:19.914699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:09:19.914706 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:09:19.914715 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:09:19.914723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:09:19.914731 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:09:19.914738 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:09:19.914745 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:09:19.914752 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:09:19.914760 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:09:19.914770 kernel: ACPI: Interpreter enabled
Feb 13 19:09:19.914778 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:09:19.914789 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:09:19.914796 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:09:19.914804 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:09:19.914811 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:09:19.914983 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:09:19.915062 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:09:19.915129 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:09:19.915197 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:09:19.915272 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:09:19.915286 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:09:19.915293 kernel: PCI host bridge to bus 0000:00
Feb 13 19:09:19.915371 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:09:19.915434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:09:19.915495 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:09:19.915555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:09:19.915639 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:09:19.915722 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:09:19.915793 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:09:19.915891 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:09:19.915961 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:09:19.916029 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:09:19.916098 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:09:19.916171 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:09:19.916239 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:09:19.916303 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:09:19.916364 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:09:19.916373 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:09:19.916381 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:09:19.916388 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:09:19.916395 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:09:19.916405 kernel: iommu: Default domain type: Translated
Feb 13 19:09:19.916412 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:09:19.916419 kernel: efivars: Registered efivars operations
Feb 13 19:09:19.916426 kernel: vgaarb: loaded
Feb 13 19:09:19.916433 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:09:19.916440 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:09:19.916448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:09:19.916455 kernel: pnp: PnP ACPI init
Feb 13 19:09:19.916533 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:09:19.916546 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:09:19.916553 kernel: NET: Registered PF_INET protocol family
Feb 13 19:09:19.916561 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:09:19.916568 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:09:19.916576 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:09:19.916583 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:09:19.916590 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:09:19.916597 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:09:19.916606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:09:19.916614 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:09:19.916621 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:09:19.916628 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:09:19.916636 kernel: kvm [1]: HYP mode not available
Feb 13 19:09:19.916643 kernel: Initialise system trusted keyrings
Feb 13 19:09:19.916650 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:09:19.916657 kernel: Key type asymmetric registered
Feb 13 19:09:19.916664 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:09:19.916673 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:09:19.916680 kernel: io scheduler mq-deadline registered
Feb 13 19:09:19.916687 kernel: io scheduler kyber registered
Feb 13 19:09:19.916695 kernel: io scheduler bfq registered
Feb 13 19:09:19.916702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:09:19.916709 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:09:19.916716 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:09:19.916782 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:09:19.916792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:09:19.916801 kernel: thunder_xcv, ver 1.0
Feb 13 19:09:19.916808 kernel: thunder_bgx, ver 1.0
Feb 13 19:09:19.916815 kernel: nicpf, ver 1.0
Feb 13 19:09:19.916822 kernel: nicvf, ver 1.0
Feb 13 19:09:19.916910 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:09:19.916988 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:09:19 UTC (1739473759)
Feb 13 19:09:19.916998 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:09:19.917006 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:09:19.917013 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:09:19.917023 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:09:19.917031 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:09:19.917038 kernel: Segment Routing with IPv6
Feb 13 19:09:19.917045 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:09:19.917052 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:09:19.917059 kernel: Key type dns_resolver registered
Feb 13 19:09:19.917066 kernel: registered taskstats version 1
Feb 13 19:09:19.917074 kernel: Loading compiled-in X.509 certificates
Feb 13 19:09:19.917081 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 19:09:19.917090 kernel: Key type .fscrypt registered
Feb 13 19:09:19.917097 kernel: Key type fscrypt-provisioning registered
Feb 13 19:09:19.917104 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:09:19.917111 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:09:19.917119 kernel: ima: No architecture policies found
Feb 13 19:09:19.917126 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:09:19.917133 kernel: clk: Disabling unused clocks
Feb 13 19:09:19.917140 kernel: Freeing unused kernel memory: 39936K
Feb 13 19:09:19.917149 kernel: Run /init as init process
Feb 13 19:09:19.917156 kernel: with arguments:
Feb 13 19:09:19.917163 kernel: /init
Feb 13 19:09:19.917170 kernel: with environment:
Feb 13 19:09:19.917176 kernel: HOME=/
Feb 13 19:09:19.917183 kernel: TERM=linux
Feb 13 19:09:19.917190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:09:19.917199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:09:19.917210 systemd[1]: Detected virtualization kvm.
Feb 13 19:09:19.917218 systemd[1]: Detected architecture arm64.
Feb 13 19:09:19.917225 systemd[1]: Running in initrd.
Feb 13 19:09:19.917239 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:09:19.917247 systemd[1]: Hostname set to <localhost>.
Feb 13 19:09:19.917255 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:09:19.917262 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:09:19.917270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:09:19.917279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:09:19.917288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:09:19.917295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:09:19.917303 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:09:19.917311 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:09:19.917320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:09:19.917328 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:09:19.917337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:09:19.917345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:09:19.917353 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:09:19.917361 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:09:19.917368 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:09:19.917376 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:09:19.917383 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:09:19.917391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:09:19.917399 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:09:19.917408 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:09:19.917415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:09:19.917423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:09:19.917431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:09:19.917438 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:09:19.917446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:09:19.917454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:09:19.917461 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:09:19.917471 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:09:19.917479 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:09:19.917487 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:09:19.917495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:19.917502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:09:19.917510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:09:19.917518 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:09:19.917547 systemd-journald[240]: Collecting audit messages is disabled.
Feb 13 19:09:19.917567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:09:19.917577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:19.917585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:19.917593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:09:19.917602 systemd-journald[240]: Journal started
Feb 13 19:09:19.917625 systemd-journald[240]: Runtime Journal (/run/log/journal/062c9b525fce4393adf5f19c46b642eb) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:09:19.903549 systemd-modules-load[241]: Inserted module 'overlay'
Feb 13 19:09:19.921934 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:09:19.921957 kernel: Bridge firewalling registered
Feb 13 19:09:19.923557 systemd-modules-load[241]: Inserted module 'br_netfilter'
Feb 13 19:09:19.924076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:09:19.926596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:09:19.938179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:09:19.939866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:09:19.942013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:09:19.946816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:19.948950 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:09:19.950059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:09:19.951317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:09:19.954667 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:09:19.959882 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:09:19.966353 dracut-cmdline[272]: dracut-dracut-053
Feb 13 19:09:19.968909 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 19:09:19.990270 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 19:09:19.990288 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:09:19.990321 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:09:19.995064 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 19:09:19.996051 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:09:20.000100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:09:20.046874 kernel: SCSI subsystem initialized
Feb 13 19:09:20.051860 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:09:20.059875 kernel: iscsi: registered transport (tcp)
Feb 13 19:09:20.077871 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:09:20.077899 kernel: QLogic iSCSI HBA Driver
Feb 13 19:09:20.134931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:09:20.143011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:09:20.163869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:09:20.163930 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:09:20.165560 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:09:20.220873 kernel: raid6: neonx8 gen() 15780 MB/s
Feb 13 19:09:20.237869 kernel: raid6: neonx4 gen() 15807 MB/s
Feb 13 19:09:20.254865 kernel: raid6: neonx2 gen() 13199 MB/s
Feb 13 19:09:20.271866 kernel: raid6: neonx1 gen() 10535 MB/s
Feb 13 19:09:20.288862 kernel: raid6: int64x8 gen() 6789 MB/s
Feb 13 19:09:20.305864 kernel: raid6: int64x4 gen() 7344 MB/s
Feb 13 19:09:20.322862 kernel: raid6: int64x2 gen() 6111 MB/s
Feb 13 19:09:20.339965 kernel: raid6: int64x1 gen() 5052 MB/s
Feb 13 19:09:20.339978 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s
Feb 13 19:09:20.358003 kernel: raid6: .... xor() 12495 MB/s, rmw enabled
Feb 13 19:09:20.358015 kernel: raid6: using neon recovery algorithm
Feb 13 19:09:20.368406 kernel: xor: measuring software checksum speed
Feb 13 19:09:20.368466 kernel: 8regs : 21624 MB/sec
Feb 13 19:09:20.368475 kernel: 32regs : 21664 MB/sec
Feb 13 19:09:20.369044 kernel: arm64_neon : 27785 MB/sec
Feb 13 19:09:20.369057 kernel: xor: using function: arm64_neon (27785 MB/sec)
Feb 13 19:09:20.421870 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:09:20.432903 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:09:20.443029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:09:20.454763 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Feb 13 19:09:20.457895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:09:20.461593 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:09:20.476173 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Feb 13 19:09:20.503559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:09:20.517013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:09:20.555629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:09:20.563042 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:09:20.575682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:09:20.577714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:09:20.579579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:09:20.582293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:09:20.592995 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:09:20.601217 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:09:20.612772 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:09:20.612916 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:09:20.612931 kernel: GPT:9289727 != 19775487
Feb 13 19:09:20.612941 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:09:20.612950 kernel: GPT:9289727 != 19775487
Feb 13 19:09:20.612967 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:09:20.612977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:20.602738 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:09:20.613810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:09:20.613939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:20.616184 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:20.618096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:09:20.618303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:20.620540 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:20.631270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:20.637887 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506)
Feb 13 19:09:20.637926 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Feb 13 19:09:20.641073 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:09:20.650907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:20.655896 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:09:20.659916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:09:20.661150 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:09:20.666650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:09:20.690037 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:09:20.691926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:20.696580 disk-uuid[553]: Primary Header is updated.
Feb 13 19:09:20.696580 disk-uuid[553]: Secondary Entries is updated.
Feb 13 19:09:20.696580 disk-uuid[553]: Secondary Header is updated.
Feb 13 19:09:20.706028 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:20.711946 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:21.734875 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:21.735744 disk-uuid[554]: The operation has completed successfully.
Feb 13 19:09:21.764386 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:09:21.764492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:09:21.793061 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:09:21.795916 sh[574]: Success
Feb 13 19:09:21.807877 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:09:21.840458 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:09:21.850330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:09:21.864910 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 19:09:21.864961 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:21.864973 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:09:21.864982 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:09:21.866396 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:09:21.869133 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:09:21.876005 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:09:21.877162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:09:21.890452 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:09:21.892257 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:09:21.901503 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 19:09:21.901560 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:21.901572 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:21.905867 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:21.913916 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:09:21.915761 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 19:09:21.922883 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:09:21.933064 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:09:21.997020 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:09:22.010074 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:09:22.034484 systemd-networkd[765]: lo: Link UP
Feb 13 19:09:22.034496 systemd-networkd[765]: lo: Gained carrier
Feb 13 19:09:22.035439 systemd-networkd[765]: Enumeration completed
Feb 13 19:09:22.035535 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:09:22.038132 ignition[665]: Ignition 2.20.0
Feb 13 19:09:22.035940 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:09:22.038138 ignition[665]: Stage: fetch-offline
Feb 13 19:09:22.035944 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:09:22.038184 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:22.037350 systemd[1]: Reached target network.target - Network.
Feb 13 19:09:22.038194 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:22.038433 systemd-networkd[765]: eth0: Link UP
Feb 13 19:09:22.038377 ignition[665]: parsed url from cmdline: ""
Feb 13 19:09:22.038437 systemd-networkd[765]: eth0: Gained carrier
Feb 13 19:09:22.038381 ignition[665]: no config URL provided
Feb 13 19:09:22.038445 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:09:22.038387 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:09:22.053892 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:09:22.038397 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:09:22.038431 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 19:09:22.038436 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:09:22.048738 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:09:22.092460 ignition[665]: parsing config with SHA512: 0bf8a8944c1dccd9363ecc2e903ccaa4f34fec97d858e5b1781a3bc21ba2e922b3e34cd1f3cd7acf55e987b0b7cbfe49364f79c2d3cdbf8a8c5906bf6b4e77ae
Feb 13 19:09:22.097301 unknown[665]: fetched base config from "system"
Feb 13 19:09:22.097311 unknown[665]: fetched user config from "qemu"
Feb 13 19:09:22.097747 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 19:09:22.099745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:09:22.097861 ignition[665]: Ignition finished successfully
Feb 13 19:09:22.101530 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:09:22.110079 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:09:22.121549 ignition[773]: Ignition 2.20.0
Feb 13 19:09:22.121560 ignition[773]: Stage: kargs
Feb 13 19:09:22.121742 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:22.121752 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:22.124584 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:09:22.122766 ignition[773]: kargs: kargs passed
Feb 13 19:09:22.122815 ignition[773]: Ignition finished successfully
Feb 13 19:09:22.135004 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:09:22.144558 ignition[782]: Ignition 2.20.0
Feb 13 19:09:22.144568 ignition[782]: Stage: disks
Feb 13 19:09:22.144730 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:22.147631 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:09:22.144741 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:22.149317 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:09:22.145672 ignition[782]: disks: disks passed
Feb 13 19:09:22.151156 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:09:22.145718 ignition[782]: Ignition finished successfully
Feb 13 19:09:22.153706 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:09:22.155747 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:09:22.157348 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:09:22.167000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:09:22.177925 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:09:22.182689 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:09:22.184986 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:09:22.240890 kernel: EXT4-fs (vda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 19:09:22.241574 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:09:22.242980 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:09:22.254939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:09:22.256770 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:09:22.258291 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:09:22.258333 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:09:22.258356 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:09:22.267595 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Feb 13 19:09:22.267619 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 19:09:22.262894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:09:22.273400 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:22.273424 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:22.273434 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:22.267189 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:09:22.275030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:09:22.320893 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:09:22.324022 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:09:22.328189 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:09:22.332388 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:09:22.425911 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:09:22.432938 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:09:22.435214 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:09:22.440856 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 19:09:22.457240 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:09:22.459191 ignition[915]: INFO : Ignition 2.20.0
Feb 13 19:09:22.459191 ignition[915]: INFO : Stage: mount
Feb 13 19:09:22.459191 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:22.459191 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:22.459191 ignition[915]: INFO : mount: mount passed
Feb 13 19:09:22.459191 ignition[915]: INFO : Ignition finished successfully
Feb 13 19:09:22.459962 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:09:22.466950 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:09:22.862680 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:09:22.875014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:09:22.882865 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 19:09:22.885197 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 19:09:22.885218 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:22.885233 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:22.887853 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:22.889122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:09:22.911523 ignition[946]: INFO : Ignition 2.20.0
Feb 13 19:09:22.911523 ignition[946]: INFO : Stage: files
Feb 13 19:09:22.913253 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:22.913253 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:22.913253 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:09:22.916864 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:09:22.916864 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:09:22.920190 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:09:22.921755 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:09:22.923251 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 19:09:22.924450 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:09:22.925803 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:09:22.927811 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:09:22.995719 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:09:23.348974 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:09:23.348974 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:09:23.352721 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:09:23.475153 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 19:09:23.561640 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:09:23.659971 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:09:23.659971 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:09:23.663507 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:09:23.833090 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:09:24.105725 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:09:24.105725 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:09:24.109506 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:09:24.149680 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:09:24.153871 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:09:24.156477 ignition[946]: INFO : files: files passed
Feb 13 19:09:24.156477 ignition[946]: INFO : Ignition finished successfully
Feb 13 19:09:24.157343 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:09:24.169319 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:09:24.172376 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:09:24.173821 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:09:24.175851 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:09:24.180455 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:09:24.183140 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:24.183140 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:24.186280 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:24.189118 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:09:24.190574 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:09:24.201040 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:09:24.222006 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:09:24.222114 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:09:24.224390 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:09:24.226176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:09:24.227935 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:09:24.237991 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:09:24.251461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:09:24.262040 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:09:24.270157 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:09:24.271446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:09:24.273616 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:09:24.275462 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:09:24.275606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:09:24.278116 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:09:24.280135 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:09:24.281810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:09:24.283603 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:09:24.285625 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:09:24.287701 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:09:24.289605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:09:24.291636 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:09:24.293754 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:09:24.295565 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:09:24.297130 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:09:24.297273 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:09:24.299672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:09:24.301708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:09:24.303725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:09:24.306908 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:09:24.308220 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:09:24.308360 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:09:24.311342 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:09:24.311464 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:09:24.313587 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:09:24.315252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:09:24.318892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:09:24.320269 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:09:24.322404 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:09:24.324049 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:09:24.324144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:09:24.325766 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:09:24.325864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:09:24.327498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:09:24.327613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:09:24.329462 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:09:24.329583 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:09:24.342046 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:09:24.342990 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:09:24.343124 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:09:24.349080 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:09:24.349965 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:09:24.350101 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:09:24.352005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:09:24.352110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
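Everything from nss-lookup.target down is the initrd tearing itself down before the root switch: each Deactivated/Stopped pair is one unit leaving the transaction. How a unit ended up remains queryable through systemd's property interface; a small sketch, using dracut-pre-pivot.service from the log as the example unit (initrd-scoped units may report not-found once the real root is running):

import subprocess

# "systemctl show -p" prints selected properties as key=value lines;
# ActiveState and Result capture how the unit finished.
out = subprocess.run(
    ["systemctl", "show", "-p", "ActiveState,Result", "dracut-pre-pivot.service"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # e.g. ActiveState=inactive, Result=success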
Feb 13 19:09:24.357041 ignition[1001]: INFO : Ignition 2.20.0 Feb 13 19:09:24.357041 ignition[1001]: INFO : Stage: umount Feb 13 19:09:24.357041 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:09:24.357041 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:09:24.363527 ignition[1001]: INFO : umount: umount passed Feb 13 19:09:24.363527 ignition[1001]: INFO : Ignition finished successfully Feb 13 19:09:24.358660 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:09:24.358747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:09:24.360637 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:09:24.360710 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:09:24.363061 systemd[1]: Stopped target network.target - Network. Feb 13 19:09:24.366977 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:09:24.367044 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:09:24.368692 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:09:24.368738 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:09:24.370619 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:09:24.370666 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:09:24.372686 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:09:24.372730 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:09:24.374783 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:09:24.376601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:09:24.379186 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:09:24.382884 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 19:09:24.384489 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:09:24.385898 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:09:24.388341 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:09:24.388401 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:09:24.401980 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:09:24.402899 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:09:24.402973 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:09:24.405157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:09:24.407263 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:09:24.408135 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:09:24.412218 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:09:24.412277 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:09:24.413460 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:09:24.413506 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:09:24.415451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:09:24.415496 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:09:24.429185 systemd[1]: systemd-udevd.service: Deactivated successfully. 
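The umount stage is Ignition's last: no config sources remain ("no configs at /usr/lib/ignition/base.d"), so it only unmounts and reports success. The result file written earlier survives into the real root as /etc/.ignition-result.json; a sketch for inspecting it after boot, assuming nothing about its schema beyond it being JSON:

import json
from pathlib import Path

# The files stage wrote /sysroot/etc/.ignition-result.json, which the
# booted system sees at /etc/.ignition-result.json.
path = Path("/etc/.ignition-result.json")
if path.exists():
    print(json.dumps(json.loads(path.read_text()), indent=2))
else:
    print("no Ignition result recorded on this boot")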
Feb 13 19:09:24.429344 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:09:24.432007 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:09:24.432111 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:09:24.438231 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:09:24.438302 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:09:24.439557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:09:24.439590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:09:24.441574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:09:24.441628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:09:24.444607 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:09:24.444656 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:09:24.447422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:09:24.447473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:09:24.461005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:09:24.462151 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:09:24.462218 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:09:24.464386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:09:24.464438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:09:24.466681 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:09:24.466771 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:09:24.468589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:09:24.468663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:09:24.471213 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:09:24.472339 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:09:24.472402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:09:24.474927 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:09:24.487237 systemd[1]: Switching root. Feb 13 19:09:24.512111 systemd-journald[240]: Journal stopped Feb 13 19:09:25.303129 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). Feb 13 19:09:25.303186 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:09:25.303199 kernel: SELinux: policy capability open_perms=1 Feb 13 19:09:25.303209 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:09:25.303226 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:09:25.303240 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:09:25.303251 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:09:25.303260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:09:25.303270 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:09:25.303280 kernel: audit: type=1403 audit(1739473764.707:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:09:25.303291 systemd[1]: Successfully loaded SELinux policy in 31.914ms. 
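With the root switched, PID 1 loads the SELinux policy and the kernel logs the policy capabilities above; the audit type=1403 line is the policy-load record. Whether the resulting mode is enforcing or permissive can be read back from selinuxfs, the same kernel interface sestatus uses; a minimal sketch:

from pathlib import Path

# /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive).
enforce = Path("/sys/fs/selinux/enforce")
if enforce.exists():
    mode = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
    print("SELinux loaded,", mode)
else:
    print("SELinux not enabled on this boot")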
Feb 13 19:09:25.303307 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.858ms. Feb 13 19:09:25.303319 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:09:25.303334 systemd[1]: Detected virtualization kvm. Feb 13 19:09:25.303346 systemd[1]: Detected architecture arm64. Feb 13 19:09:25.303357 systemd[1]: Detected first boot. Feb 13 19:09:25.303367 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:09:25.303380 zram_generator::config[1047]: No configuration found. Feb 13 19:09:25.303392 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:09:25.303403 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:09:25.303413 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:09:25.303423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:09:25.303436 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:09:25.303447 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:09:25.303458 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:09:25.303468 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:09:25.303479 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:09:25.303490 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:09:25.303500 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:09:25.303510 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:09:25.303521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:09:25.303533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:09:25.303544 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:09:25.303554 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:09:25.303565 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:09:25.303575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:09:25.303586 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:09:25.303596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:09:25.303608 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:09:25.303618 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:09:25.303631 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:09:25.303642 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:09:25.303653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:09:25.303663 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
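"Detected first boot" plus "Initializing machine ID from VM UUID" means /etc/machine-id was empty, so systemd seeded it from the hypervisor-supplied UUID (the machine ID is that UUID lowercased with the dashes removed). A sketch comparing the two; /sys/class/dmi/id/product_uuid is the usual SMBIOS source and is typically readable by root only:

from pathlib import Path

# Compare the committed machine ID with the VM UUID it was derived from.
machine_id = Path("/etc/machine-id").read_text().strip()
uuid_path = Path("/sys/class/dmi/id/product_uuid")  # needs root to read
if uuid_path.exists():
    vm_uuid = uuid_path.read_text().strip().replace("-", "").lower()
    print("machine-id:", machine_id)
    print("vm uuid:   ", vm_uuid)
    print("match:", machine_id == vm_uuid)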
Feb 13 19:09:25.303673 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:09:25.303684 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:09:25.303695 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:09:25.303705 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:09:25.303717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:09:25.303727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:09:25.303738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:09:25.303749 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:09:25.303759 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:09:25.303769 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:09:25.303780 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:09:25.303791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:09:25.303801 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:09:25.303814 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:09:25.303829 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:09:25.303852 systemd[1]: Reached target machines.target - Containers. Feb 13 19:09:25.303863 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:09:25.303874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:25.303884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:09:25.303895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:09:25.303905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:25.303918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:09:25.303928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:25.303939 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:09:25.303950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:25.303961 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:09:25.303971 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:09:25.303981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:09:25.303991 kernel: loop: module loaded Feb 13 19:09:25.304002 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:09:25.304012 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:09:25.304022 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:09:25.304032 kernel: ACPI: bus type drm_connector registered Feb 13 19:09:25.304041 kernel: fuse: init (API version 7.39) Feb 13 19:09:25.304051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
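The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of one template, modprobe@.service: the instance name after the "@" is substituted into the modprobe invocation (roughly modprobe -abq <instance>), which is why the loop, drm and fuse "module loaded" kernel lines follow. A sketch starting one more instance; the module name is just an example and the command needs root:

import subprocess

# Instantiating the template loads the named module via modprobe.
module = "fuse"  # example instance name
subprocess.run(["systemctl", "start", f"modprobe@{module}.service"], check=True)
loaded = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
print(module, "loaded:", any(line.startswith(module + " ")
                             for line in loaded.splitlines()[1:]))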
Feb 13 19:09:25.304061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:09:25.304071 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:09:25.304098 systemd-journald[1118]: Collecting audit messages is disabled. Feb 13 19:09:25.304125 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:09:25.304136 systemd-journald[1118]: Journal started Feb 13 19:09:25.304162 systemd-journald[1118]: Runtime Journal (/run/log/journal/062c9b525fce4393adf5f19c46b642eb) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:09:25.087895 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:09:25.100452 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:09:25.100814 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:09:25.306199 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:09:25.306245 systemd[1]: Stopped verity-setup.service. Feb 13 19:09:25.310598 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:09:25.311296 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:09:25.312529 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:09:25.313822 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:09:25.314992 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:09:25.316459 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:09:25.317880 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:09:25.320884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:09:25.322521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:09:25.324154 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:09:25.324316 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:09:25.325808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:25.325977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:25.327577 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:09:25.327748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:09:25.329127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:25.329298 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:09:25.330779 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:09:25.331937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:09:25.333363 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:25.333495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:25.334983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:09:25.336601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:09:25.338127 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:09:25.351406 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:09:25.359965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 19:09:25.362558 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:09:25.363830 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:09:25.363941 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:09:25.365964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:09:25.369732 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:09:25.372062 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:09:25.373317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:25.375291 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:09:25.377807 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:09:25.379154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:09:25.382066 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:09:25.384553 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:09:25.386263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:09:25.394822 systemd-journald[1118]: Time spent on flushing to /var/log/journal/062c9b525fce4393adf5f19c46b642eb is 17.435ms for 859 entries. Feb 13 19:09:25.394822 systemd-journald[1118]: System Journal (/var/log/journal/062c9b525fce4393adf5f19c46b642eb) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:09:25.428761 systemd-journald[1118]: Received client request to flush runtime journal. Feb 13 19:09:25.428809 kernel: loop0: detected capacity change from 0 to 194096 Feb 13 19:09:25.392082 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:09:25.399956 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:09:25.404367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:09:25.406085 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:09:25.407553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:09:25.409229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:09:25.410921 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:09:25.417679 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:09:25.429213 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:09:25.437076 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:09:25.439879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:09:25.440429 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:09:25.442098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:09:25.443731 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
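The journal flush statistics above are worth a sanity check: moving 859 entries to persistent storage in 17.435 ms works out to roughly 20 microseconds per entry. The arithmetic, with both figures copied from the systemd-journald message:

# Figures from the systemd-journald flush message above.
entries = 859
ms_spent = 17.435
print(f"{ms_spent / entries * 1000:.1f} us per entry")  # ~20.3 us
print(f"{entries / (ms_spent / 1000):,.0f} entries/s")  # ~49,269 entries/s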
Feb 13 19:09:25.449160 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:09:25.452423 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:09:25.475688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:09:25.477425 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 19:09:25.477440 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 19:09:25.477886 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:09:25.481797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:09:25.482940 kernel: loop1: detected capacity change from 0 to 116784 Feb 13 19:09:25.519469 kernel: loop2: detected capacity change from 0 to 113552 Feb 13 19:09:25.569880 kernel: loop3: detected capacity change from 0 to 194096 Feb 13 19:09:25.576866 kernel: loop4: detected capacity change from 0 to 116784 Feb 13 19:09:25.583874 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 19:09:25.590832 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:09:25.591331 (sd-merge)[1183]: Merged extensions into '/usr'. Feb 13 19:09:25.595507 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:09:25.595525 systemd[1]: Reloading... Feb 13 19:09:25.657572 zram_generator::config[1209]: No configuration found. Feb 13 19:09:25.728763 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:09:25.746496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:25.781787 systemd[1]: Reloading finished in 185 ms. Feb 13 19:09:25.808362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:09:25.809892 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:09:25.820111 systemd[1]: Starting ensure-sysext.service... Feb 13 19:09:25.821946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:09:25.831464 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:09:25.831478 systemd[1]: Reloading... Feb 13 19:09:25.846284 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:09:25.846489 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:09:25.847119 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:09:25.847331 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 19:09:25.847374 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Feb 13 19:09:25.850525 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:09:25.850636 systemd-tmpfiles[1244]: Skipping /boot Feb 13 19:09:25.859027 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. 
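The (sd-merge) lines are systemd-sysext at work: the three .raw images staged under /etc/extensions are overlaid onto /usr, which is where the containerd, docker and kubernetes payloads come from on this otherwise immutable image. Each image must carry an extension-release file named after itself for the merge to be accepted; a sketch listing what is staged and where that marker lives inside each image, following the sysext naming convention:

from pathlib import Path

# systemd-sysext merges images found in /etc/extensions (among other dirs).
# Inside each image, usr/lib/extension-release.d/extension-release.<NAME>
# must declare a compatible ID/SYSEXT_LEVEL for the merge to go ahead.
for image in sorted(Path("/etc/extensions").glob("*.raw")):
    name = image.stem
    print(f"{image} -> usr/lib/extension-release.d/extension-release.{name}")

Running systemd-sysext status shows the same merge state interactively.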
Feb 13 19:09:25.859134 systemd-tmpfiles[1244]: Skipping /boot Feb 13 19:09:25.885872 zram_generator::config[1271]: No configuration found. Feb 13 19:09:25.963808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:25.999561 systemd[1]: Reloading finished in 167 ms. Feb 13 19:09:26.012870 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:09:26.026320 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:09:26.034475 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:09:26.036995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:09:26.039407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:09:26.044260 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:09:26.050192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:09:26.055274 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:09:26.058831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:26.062172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:26.066673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:26.070157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:26.071470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:26.074290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:09:26.076318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:09:26.078192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:26.078350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:09:26.080142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:26.080291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:26.082426 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:26.082549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:26.091758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:26.092678 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Feb 13 19:09:26.104163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:26.108154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:26.114530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:26.115620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:26.117499 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
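The earlier "Duplicate line for path" warnings are benign: two tmpfiles.d fragments declare the same path and systemd-tmpfiles keeps the first one it parses. A simplified sketch that surfaces such collisions; unlike the real tool, it ignores the shadowing of same-named fragments across /etc, /run and /usr:

from collections import defaultdict
from pathlib import Path

# Map each declared path to the fragments declaring it.
seen = defaultdict(list)
for d in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
    for frag in sorted(Path(d).glob("*.conf")):
        for line in frag.read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and not fields[0].startswith("#"):
                seen[fields[1]].append(frag.name)
for path, frags in sorted(seen.items()):
    if len(frags) > 1:
        print(f"{path}: declared in {', '.join(frags)}")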
Feb 13 19:09:26.118785 augenrules[1355]: No rules Feb 13 19:09:26.119167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:09:26.122240 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:09:26.122440 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:09:26.124413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:09:26.127567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:09:26.129310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:26.129440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:26.131036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:26.131157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:09:26.133049 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:26.133412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:26.135728 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:09:26.137627 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:09:26.166593 systemd[1]: Finished ensure-sysext.service. Feb 13 19:09:26.171772 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:09:26.182071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:09:26.185037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:26.190192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:26.194091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:09:26.197971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:26.202954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:26.204114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:26.205801 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:09:26.212260 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:09:26.214698 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:09:26.215264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:26.215399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:26.217015 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:09:26.217199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:09:26.220056 augenrules[1386]: /sbin/augenrules: No change Feb 13 19:09:26.218639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:26.218753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:09:26.221746 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:26.221993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 19:09:26.224526 systemd-resolved[1310]: Positive Trust Anchors: Feb 13 19:09:26.224552 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:09:26.224585 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:09:26.226966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1343) Feb 13 19:09:26.230186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:09:26.230255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:09:26.232797 systemd-resolved[1310]: Defaulting to hostname 'linux'. Feb 13 19:09:26.233400 augenrules[1413]: No rules Feb 13 19:09:26.236828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:09:26.238674 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:09:26.238889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:09:26.244905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:09:26.268211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:09:26.281366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:09:26.304787 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:09:26.306684 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:09:26.317118 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:09:26.328112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:09:26.331647 systemd-networkd[1398]: lo: Link UP Feb 13 19:09:26.331654 systemd-networkd[1398]: lo: Gained carrier Feb 13 19:09:26.332553 systemd-networkd[1398]: Enumeration completed Feb 13 19:09:26.332703 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:09:26.334007 systemd[1]: Reached target network.target - Network. Feb 13 19:09:26.336246 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:09:26.338294 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:09:26.338298 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:09:26.339145 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
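The positive trust anchor systemd-resolved logs is the DNS root zone's DS record, compiled in for DNSSEC validation; the negative anchors that follow are the private and special-use zones it will not attempt to validate. The DS record's four fields decode per RFC 4034; a sketch pulling apart the exact string from the log:

# The root trust anchor exactly as systemd-resolved logged it above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
_, _, _, key_tag, algorithm, digest_type, digest = ds.split()
print("key tag:    ", key_tag)      # 20326 identifies the 2017 root KSK
print("algorithm:  ", algorithm)    # 8 = RSA/SHA-256
print("digest type:", digest_type)  # 2 = SHA-256
print("digest:     ", digest[:16] + "...")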
Feb 13 19:09:26.341202 systemd-networkd[1398]: eth0: Link UP Feb 13 19:09:26.341206 systemd-networkd[1398]: eth0: Gained carrier Feb 13 19:09:26.341230 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:09:26.343037 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:09:26.360168 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:09:26.360898 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:09:26.361930 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 19:09:26.362888 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:09:26.362953 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 19:09:26.150984 UTC. Feb 13 19:09:26.378953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:09:26.393957 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:09:26.395488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:09:26.396868 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:09:26.398066 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:09:26.399368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:09:26.400854 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:09:26.402044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:09:26.403360 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:09:26.404652 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:09:26.404755 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:09:26.405717 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:09:26.408888 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:09:26.411480 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:09:26.421867 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:09:26.424325 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:09:26.426005 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:09:26.427256 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:09:26.428333 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:09:26.429568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:09:26.429661 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:09:26.430649 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:09:26.432277 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:09:26.433057 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
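eth0 came up with a DHCPv4 lease of 10.0.0.132/16 via gateway 10.0.0.1, and timesyncd immediately synchronized against the same host on port 123. A small sketch deriving what that lease implies, using only the standard-library ipaddress module with the values from the log:

import ipaddress

# Lease parameters as logged by systemd-networkd above.
iface = ipaddress.ip_interface("10.0.0.132/16")
gateway = ipaddress.ip_address("10.0.0.1")
print("network:  ", iface.network)                    # 10.0.0.0/16
print("netmask:  ", iface.network.netmask)            # 255.255.0.0
print("broadcast:", iface.network.broadcast_address)  # 10.0.255.255
print("gateway on-link:", gateway in iface.network)   # True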
Feb 13 19:09:26.436108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:09:26.442064 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:09:26.444832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:09:26.445946 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:09:26.452706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:09:26.453830 jq[1442]: false Feb 13 19:09:26.456118 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:09:26.461022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:09:26.465117 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:09:26.468871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:09:26.469287 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:09:26.470595 extend-filesystems[1443]: Found loop3 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found loop4 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found loop5 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda1 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda2 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda3 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found usr Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda4 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda6 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda7 Feb 13 19:09:26.472679 extend-filesystems[1443]: Found vda9 Feb 13 19:09:26.472679 extend-filesystems[1443]: Checking size of /dev/vda9 Feb 13 19:09:26.471488 dbus-daemon[1441]: [system] SELinux support is enabled Feb 13 19:09:26.473054 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:09:26.476380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:09:26.479042 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:09:26.483985 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:09:26.492349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:09:26.492519 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:09:26.492775 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:09:26.493407 jq[1460]: true Feb 13 19:09:26.493956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:09:26.499390 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:09:26.499550 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
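extend-filesystems begins by inventorying every block device and partition it can see, hence the run of "Found ..." lines, before growing the root partition (the resize itself follows in the next lines, taking /dev/vda9 from 553472 to 1864699 4 KiB blocks, roughly 2.1 GiB to 7.1 GiB). A sketch producing a similar inventory from the kernel's own partition list:

# Rebuild a "Found ..." style inventory from /proc/partitions.
with open("/proc/partitions") as f:
    rows = f.read().splitlines()[2:]  # skip the header and blank line
for row in rows:
    major, minor, blocks, name = row.split()  # blocks are 1 KiB units
    print(f"Found {name} ({int(blocks) // 1024} MiB)")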
Feb 13 19:09:26.504639 extend-filesystems[1443]: Resized partition /dev/vda9 Feb 13 19:09:26.510801 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:09:26.519903 jq[1466]: true Feb 13 19:09:26.523620 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:09:26.529793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:09:26.529821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:09:26.534979 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:09:26.535008 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:09:26.537593 tar[1463]: linux-arm64/helm Feb 13 19:09:26.538576 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:09:26.545776 update_engine[1457]: I20250213 19:09:26.545631 1457 main.cc:92] Flatcar Update Engine starting Feb 13 19:09:26.553383 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:09:26.557161 update_engine[1457]: I20250213 19:09:26.554131 1457 update_check_scheduler.cc:74] Next update check in 12m0s Feb 13 19:09:26.558869 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:09:26.562862 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1365) Feb 13 19:09:26.566764 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:09:26.567170 systemd-logind[1454]: New seat seat0. Feb 13 19:09:26.570976 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:09:26.578394 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:09:26.594978 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:09:26.594978 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:09:26.594978 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:09:26.602566 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Feb 13 19:09:26.598998 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:09:26.601032 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:09:26.612661 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:09:26.613747 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:09:26.616931 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:09:26.715481 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:09:26.785569 containerd[1475]: time="2025-02-13T19:09:26.785434160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:09:26.813890 containerd[1475]: time="2025-02-13T19:09:26.813789680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815459 containerd[1475]: time="2025-02-13T19:09:26.815291680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815459 containerd[1475]: time="2025-02-13T19:09:26.815325400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:09:26.815459 containerd[1475]: time="2025-02-13T19:09:26.815341280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:09:26.815582 containerd[1475]: time="2025-02-13T19:09:26.815490160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:09:26.815582 containerd[1475]: time="2025-02-13T19:09:26.815507080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815582 containerd[1475]: time="2025-02-13T19:09:26.815561240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815582 containerd[1475]: time="2025-02-13T19:09:26.815575640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815768 containerd[1475]: time="2025-02-13T19:09:26.815730560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815768 containerd[1475]: time="2025-02-13T19:09:26.815754040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815768 containerd[1475]: time="2025-02-13T19:09:26.815767200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815825 containerd[1475]: time="2025-02-13T19:09:26.815775960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.815883 containerd[1475]: time="2025-02-13T19:09:26.815865800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.816478 containerd[1475]: time="2025-02-13T19:09:26.816062680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:26.816478 containerd[1475]: time="2025-02-13T19:09:26.816168360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:26.816478 containerd[1475]: time="2025-02-13T19:09:26.816182560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:09:26.816478 containerd[1475]: time="2025-02-13T19:09:26.816267120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:09:26.816478 containerd[1475]: time="2025-02-13T19:09:26.816315560Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:09:26.820092 containerd[1475]: time="2025-02-13T19:09:26.820063120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:09:26.820158 containerd[1475]: time="2025-02-13T19:09:26.820120320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:09:26.820158 containerd[1475]: time="2025-02-13T19:09:26.820135640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:09:26.820158 containerd[1475]: time="2025-02-13T19:09:26.820150920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:09:26.820308 containerd[1475]: time="2025-02-13T19:09:26.820165680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:09:26.820354 containerd[1475]: time="2025-02-13T19:09:26.820343360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:09:26.820624 containerd[1475]: time="2025-02-13T19:09:26.820606520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:09:26.820726 containerd[1475]: time="2025-02-13T19:09:26.820709160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:09:26.820756 containerd[1475]: time="2025-02-13T19:09:26.820730360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:09:26.820756 containerd[1475]: time="2025-02-13T19:09:26.820746560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:09:26.820798 containerd[1475]: time="2025-02-13T19:09:26.820764680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820798 containerd[1475]: time="2025-02-13T19:09:26.820777920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820798 containerd[1475]: time="2025-02-13T19:09:26.820790240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820887 containerd[1475]: time="2025-02-13T19:09:26.820802440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820887 containerd[1475]: time="2025-02-13T19:09:26.820816040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820887 containerd[1475]: time="2025-02-13T19:09:26.820827880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820887 containerd[1475]: time="2025-02-13T19:09:26.820868080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:09:26.820887 containerd[1475]: time="2025-02-13T19:09:26.820881840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820903120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820916840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820930400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820942080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820953120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.820967 containerd[1475]: time="2025-02-13T19:09:26.820966200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.820978400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.820991040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.821003040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.821017200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.821030680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.821044520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821069 containerd[1475]: time="2025-02-13T19:09:26.821057760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821178 containerd[1475]: time="2025-02-13T19:09:26.821074760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:09:26.821178 containerd[1475]: time="2025-02-13T19:09:26.821094960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821178 containerd[1475]: time="2025-02-13T19:09:26.821108680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821178 containerd[1475]: time="2025-02-13T19:09:26.821119040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821300560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821321000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821332240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821343800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821352840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821364920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821374320Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:09:26.821378 containerd[1475]: time="2025-02-13T19:09:26.821384160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:09:26.821741 containerd[1475]: time="2025-02-13T19:09:26.821660480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:09:26.821741 containerd[1475]: time="2025-02-13T19:09:26.821706280Z" level=info msg="Connect containerd service" Feb 13 19:09:26.821741 containerd[1475]: time="2025-02-13T19:09:26.821732440Z" level=info msg="using legacy CRI server" Feb 13 19:09:26.821741 containerd[1475]: time="2025-02-13T19:09:26.821739280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:09:26.822045 containerd[1475]: time="2025-02-13T19:09:26.821982760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:09:26.822621 containerd[1475]: time="2025-02-13T19:09:26.822577320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.822884760Z" level=info msg="Start subscribing containerd event" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.823017800Z" level=info msg="Start recovering state" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.823091280Z" level=info msg="Start event monitor" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.823102840Z" level=info msg="Start snapshots syncer" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.823112600Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:09:26.823322 containerd[1475]: time="2025-02-13T19:09:26.823121720Z" level=info msg="Start streaming server" Feb 13 19:09:26.825511 containerd[1475]: time="2025-02-13T19:09:26.823442480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:09:26.825511 containerd[1475]: time="2025-02-13T19:09:26.825243320Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:09:26.826414 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:09:26.827924 containerd[1475]: time="2025-02-13T19:09:26.827894320Z" level=info msg="containerd successfully booted in 0.047878s" Feb 13 19:09:26.910884 tar[1463]: linux-arm64/LICENSE Feb 13 19:09:26.910884 tar[1463]: linux-arm64/README.md Feb 13 19:09:26.923195 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:09:27.826961 systemd-networkd[1398]: eth0: Gained IPv6LL Feb 13 19:09:27.830883 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:09:27.832532 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:09:27.846289 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:09:27.848801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:27.851032 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:09:27.868796 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:09:27.870916 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
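The "failed to load cni during init" error above is expected on a first boot: the CRI plugin looks for a network config under /etc/cni/net.d (NetworkPluginConfDir in the config dump) and finds none, and the "cni network conf syncer" it starts will pick one up once a file appears. A minimal sketch of dropping in a bridge conflist follows; the file name, network name, and subnet are illustrative assumptions, not values from this host, while "bridge", "host-local", and "loopback" are the standard CNI reference plugin types.

    package main

    import "os"

    // Minimal sketch: write a CNI conflist where the CRI plugin looks for one
    // (NetworkPluginConfDir=/etc/cni/net.d in the config dump above).
    // The network name and subnet are made-up examples.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        },
        { "type": "loopback" }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        // 10-example.conflist is a hypothetical file name; CNI loads files
        // from this directory in lexical order.
        if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }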
Feb 13 19:09:27.872595 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:09:27.874489 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:09:28.315001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:28.319357 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:09:28.328257 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:09:28.347500 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:09:28.359142 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:09:28.364378 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:09:28.364603 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:09:28.368167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:09:28.383103 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:09:28.391172 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:09:28.393488 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:09:28.394953 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:09:28.396057 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:09:28.398907 systemd[1]: Startup finished in 578ms (kernel) + 5.004s (initrd) + 3.724s (userspace) = 9.307s. Feb 13 19:09:28.410757 agetty[1553]: failed to open credentials directory Feb 13 19:09:28.410808 agetty[1554]: failed to open credentials directory Feb 13 19:09:28.781140 kubelet[1538]: E0213 19:09:28.781042 1538 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:09:28.783675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:09:28.783815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:09:32.130414 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:09:32.131538 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). Feb 13 19:09:32.205545 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.207453 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.216017 systemd-logind[1454]: New session 1 of user core. Feb 13 19:09:32.216942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:09:32.223064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:09:32.231533 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:09:32.234838 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:09:32.241132 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:09:32.321349 systemd[1573]: Queued start job for default target default.target. 
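The kubelet exit above is the stock crash on an uninitialized node: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so the unit fails and systemd schedules a restart. A small sketch of the same preflight check, with the path taken from the error message:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // Path copied from the kubelet error above; on this node the file
        // only appears after kubeadm init/join generates it.
        const path = "/var/lib/kubelet/config.yaml"
        _, err := os.Stat(path)
        switch {
        case err == nil:
            fmt.Println("kubelet config present:", path)
        case errors.Is(err, fs.ErrNotExist):
            fmt.Println("node not initialized yet; kubelet will crash-loop until", path, "exists")
        default:
            fmt.Println("unexpected error:", err)
        }
    }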
Feb 13 19:09:32.332752 systemd[1573]: Created slice app.slice - User Application Slice. Feb 13 19:09:32.332788 systemd[1573]: Reached target paths.target - Paths. Feb 13 19:09:32.332800 systemd[1573]: Reached target timers.target - Timers. Feb 13 19:09:32.333980 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:09:32.343245 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:09:32.343304 systemd[1573]: Reached target sockets.target - Sockets. Feb 13 19:09:32.343315 systemd[1573]: Reached target basic.target - Basic System. Feb 13 19:09:32.343352 systemd[1573]: Reached target default.target - Main User Target. Feb 13 19:09:32.343377 systemd[1573]: Startup finished in 97ms. Feb 13 19:09:32.343677 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:09:32.344909 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:09:32.405453 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:41044.service - OpenSSH per-connection server daemon (10.0.0.1:41044). Feb 13 19:09:32.444930 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 41044 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.446423 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.451111 systemd-logind[1454]: New session 2 of user core. Feb 13 19:09:32.462008 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:09:32.512876 sshd[1586]: Connection closed by 10.0.0.1 port 41044 Feb 13 19:09:32.512902 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:32.524057 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:41044.service: Deactivated successfully. Feb 13 19:09:32.525396 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:09:32.528808 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:09:32.529175 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:41054.service - OpenSSH per-connection server daemon (10.0.0.1:41054). Feb 13 19:09:32.530374 systemd-logind[1454]: Removed session 2. Feb 13 19:09:32.568070 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 41054 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.569308 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.572893 systemd-logind[1454]: New session 3 of user core. Feb 13 19:09:32.588045 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:09:32.635686 sshd[1593]: Connection closed by 10.0.0.1 port 41054 Feb 13 19:09:32.636154 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:32.653196 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:41054.service: Deactivated successfully. Feb 13 19:09:32.654527 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:09:32.656917 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:09:32.658204 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:43082.service - OpenSSH per-connection server daemon (10.0.0.1:43082). Feb 13 19:09:32.658905 systemd-logind[1454]: Removed session 3. 
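Each incoming connection above gets its own transient unit (sshd@N-LOCAL:PORT-PEER:PORT.service) plus a session scope, so the open/close pairs can be reconstructed from the pam_unix lines alone. A rough sketch of pulling those events out of a journal dump like this one; the regex is an assumption about the exact line shape:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches pam_unix sshd session lines as they appear in this log, e.g.
    //   sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
    var sessionRe = regexp.MustCompile(
        `sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed) for user (\w+)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := sessionRe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("pid=%s %s user=%s\n", m[1], m[2], m[3])
            }
        }
    }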
Feb 13 19:09:32.698680 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 43082 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.700055 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.704186 systemd-logind[1454]: New session 4 of user core. Feb 13 19:09:32.714998 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:09:32.765342 sshd[1600]: Connection closed by 10.0.0.1 port 43082 Feb 13 19:09:32.765776 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:32.782245 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:43082.service: Deactivated successfully. Feb 13 19:09:32.784219 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:09:32.785519 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:09:32.786798 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:43096.service - OpenSSH per-connection server daemon (10.0.0.1:43096). Feb 13 19:09:32.787533 systemd-logind[1454]: Removed session 4. Feb 13 19:09:32.826899 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 43096 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.828100 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.832016 systemd-logind[1454]: New session 5 of user core. Feb 13 19:09:32.847054 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:09:32.904956 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:09:32.905256 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:32.918695 sudo[1608]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:32.920449 sshd[1607]: Connection closed by 10.0.0.1 port 43096 Feb 13 19:09:32.921703 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:32.928157 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:43096.service: Deactivated successfully. Feb 13 19:09:32.929550 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:09:32.931761 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:09:32.938232 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:43108.service - OpenSSH per-connection server daemon (10.0.0.1:43108). Feb 13 19:09:32.939429 systemd-logind[1454]: Removed session 5. Feb 13 19:09:32.975100 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 43108 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:32.976357 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:32.979760 systemd-logind[1454]: New session 6 of user core. Feb 13 19:09:32.989024 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:09:33.039559 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:09:33.039825 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:33.042992 sudo[1617]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:33.047565 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:09:33.047864 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:33.071178 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:09:33.093188 augenrules[1639]: No rules Feb 13 19:09:33.094308 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:09:33.094487 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:09:33.095447 sudo[1616]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:33.096580 sshd[1615]: Connection closed by 10.0.0.1 port 43108 Feb 13 19:09:33.097069 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:33.112242 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:43108.service: Deactivated successfully. Feb 13 19:09:33.113566 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:09:33.115960 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:09:33.123212 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:43118.service - OpenSSH per-connection server daemon (10.0.0.1:43118). Feb 13 19:09:33.124128 systemd-logind[1454]: Removed session 6. Feb 13 19:09:33.159669 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 43118 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:09:33.160890 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:33.164884 systemd-logind[1454]: New session 7 of user core. Feb 13 19:09:33.172040 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:09:33.222765 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:09:33.223401 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:33.537131 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:09:33.537227 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:09:33.772603 dockerd[1671]: time="2025-02-13T19:09:33.772541769Z" level=info msg="Starting up" Feb 13 19:09:33.916136 dockerd[1671]: time="2025-02-13T19:09:33.916081064Z" level=info msg="Loading containers: start." Feb 13 19:09:34.054892 kernel: Initializing XFRM netlink socket Feb 13 19:09:34.120270 systemd-networkd[1398]: docker0: Link UP Feb 13 19:09:34.152187 dockerd[1671]: time="2025-02-13T19:09:34.152136958Z" level=info msg="Loading containers: done." Feb 13 19:09:34.164011 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck723385942-merged.mount: Deactivated successfully. 
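dockerd is coming up here: the overlay2 opaque-bug-check mount is the storage-driver probe, the XFRM netlink socket and docker0 link set up its internal network, and just below it logs "API listen on /run/docker.sock". Once that line appears, the Engine API answers over the unix socket; a stdlib-only sketch of querying it, where /version is a documented Engine API endpoint and the URL host is a placeholder the dialer ignores:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        tr := &http.Transport{
            // Ignore the host:port from the URL and always dial the unix
            // socket the daemon logs ("API listen on /run/docker.sock").
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
            },
        }
        client := &http.Client{Transport: tr}
        resp, err := client.Get("http://docker/version") // host is a dummy
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON with Version, ApiVersion, etc.
    }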
Feb 13 19:09:34.168475 dockerd[1671]: time="2025-02-13T19:09:34.168368130Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:09:34.168555 dockerd[1671]: time="2025-02-13T19:09:34.168491554Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:09:34.168910 dockerd[1671]: time="2025-02-13T19:09:34.168733493Z" level=info msg="Daemon has completed initialization" Feb 13 19:09:34.197175 dockerd[1671]: time="2025-02-13T19:09:34.197119044Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:09:34.197374 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:09:34.904517 containerd[1475]: time="2025-02-13T19:09:34.904471855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:09:35.543391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502344048.mount: Deactivated successfully. Feb 13 19:09:36.492714 containerd[1475]: time="2025-02-13T19:09:36.492664499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:36.493595 containerd[1475]: time="2025-02-13T19:09:36.493338017Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:09:36.494287 containerd[1475]: time="2025-02-13T19:09:36.494248079Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:36.497276 containerd[1475]: time="2025-02-13T19:09:36.497244927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:36.498465 containerd[1475]: time="2025-02-13T19:09:36.498372246Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.593859201s" Feb 13 19:09:36.498465 containerd[1475]: time="2025-02-13T19:09:36.498412920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:09:36.517707 containerd[1475]: time="2025-02-13T19:09:36.517666915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:09:38.076780 containerd[1475]: time="2025-02-13T19:09:38.076721646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:38.077553 containerd[1475]: time="2025-02-13T19:09:38.077499042Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:09:38.078459 containerd[1475]: time="2025-02-13T19:09:38.078430454Z" level=info msg="ImageCreate event 
name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:38.081922 containerd[1475]: time="2025-02-13T19:09:38.081881067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:38.082524 containerd[1475]: time="2025-02-13T19:09:38.082484251Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.564777398s" Feb 13 19:09:38.082566 containerd[1475]: time="2025-02-13T19:09:38.082522894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:09:38.100480 containerd[1475]: time="2025-02-13T19:09:38.100427569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:09:38.857953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:09:38.867025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:38.956940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:38.960536 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:09:39.071971 kubelet[1960]: E0213 19:09:39.071928 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:09:39.075410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:09:39.075685 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:09:39.254809 containerd[1475]: time="2025-02-13T19:09:39.254701511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:39.255797 containerd[1475]: time="2025-02-13T19:09:39.255494780Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:09:39.256487 containerd[1475]: time="2025-02-13T19:09:39.256452171Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:39.259437 containerd[1475]: time="2025-02-13T19:09:39.259405665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:39.260711 containerd[1475]: time="2025-02-13T19:09:39.260670488Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.160206814s" Feb 13 19:09:39.260711 containerd[1475]: time="2025-02-13T19:09:39.260707172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:09:39.279492 containerd[1475]: time="2025-02-13T19:09:39.279414853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:09:40.373709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149434281.mount: Deactivated successfully. 
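These pulls are driven through the CRI plugin, but the same operation can be issued directly with the containerd Go client against the socket from earlier ("serving... address=/run/containerd/containerd.sock"). A sketch, assuming the github.com/containerd/containerd client module is available and using the k8s.io namespace that kubelet-managed images live in:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Same image the kubelet is pulling above; WithPullUnpack also
        // unpacks it into the default snapshotter (overlayfs here).
        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.30.10", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }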
Feb 13 19:09:40.560826 containerd[1475]: time="2025-02-13T19:09:40.560760615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:40.561269 containerd[1475]: time="2025-02-13T19:09:40.561221018Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:09:40.561994 containerd[1475]: time="2025-02-13T19:09:40.561968715Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:40.563700 containerd[1475]: time="2025-02-13T19:09:40.563649560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:40.564438 containerd[1475]: time="2025-02-13T19:09:40.564404821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.284956383s" Feb 13 19:09:40.564438 containerd[1475]: time="2025-02-13T19:09:40.564437309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:09:40.582644 containerd[1475]: time="2025-02-13T19:09:40.582607099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:09:41.352622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989180447.mount: Deactivated successfully. 
Feb 13 19:09:42.024300 containerd[1475]: time="2025-02-13T19:09:42.023836753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.025220 containerd[1475]: time="2025-02-13T19:09:42.024885994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:09:42.025864 containerd[1475]: time="2025-02-13T19:09:42.025820087Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.029220 containerd[1475]: time="2025-02-13T19:09:42.029189334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.030534 containerd[1475]: time="2025-02-13T19:09:42.030505219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.447857142s" Feb 13 19:09:42.030641 containerd[1475]: time="2025-02-13T19:09:42.030625189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:09:42.047771 containerd[1475]: time="2025-02-13T19:09:42.047740940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:09:42.495328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202218615.mount: Deactivated successfully. 
Feb 13 19:09:42.504879 containerd[1475]: time="2025-02-13T19:09:42.504704942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.505592 containerd[1475]: time="2025-02-13T19:09:42.505552425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:09:42.506329 containerd[1475]: time="2025-02-13T19:09:42.506259053Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.508290 containerd[1475]: time="2025-02-13T19:09:42.508230031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:42.509080 containerd[1475]: time="2025-02-13T19:09:42.509052922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 461.277586ms" Feb 13 19:09:42.509139 containerd[1475]: time="2025-02-13T19:09:42.509086442Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:09:42.527326 containerd[1475]: time="2025-02-13T19:09:42.527291091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:09:43.026026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354657014.mount: Deactivated successfully. Feb 13 19:09:44.842474 containerd[1475]: time="2025-02-13T19:09:44.842074608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:44.842991 containerd[1475]: time="2025-02-13T19:09:44.842520625Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:09:44.843459 containerd[1475]: time="2025-02-13T19:09:44.843429534Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:44.847372 containerd[1475]: time="2025-02-13T19:09:44.847320987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:44.848697 containerd[1475]: time="2025-02-13T19:09:44.848577024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.321249258s" Feb 13 19:09:44.848697 containerd[1475]: time="2025-02-13T19:09:44.848609814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:09:49.106643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:09:49.119050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:49.209867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:49.214033 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:09:49.252284 kubelet[2183]: E0213 19:09:49.252230 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:09:49.255034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:09:49.255187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:09:49.490085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:49.502105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:49.521786 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-7.scope)... Feb 13 19:09:49.521957 systemd[1]: Reloading... Feb 13 19:09:49.592795 zram_generator::config[2237]: No configuration found. Feb 13 19:09:49.723313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:49.777715 systemd[1]: Reloading finished in 255 ms. Feb 13 19:09:49.818886 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:49.823001 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:09:49.824939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:49.826803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:50.070078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:50.077403 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:09:50.122219 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:09:50.122219 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:09:50.122219 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
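The deprecation warnings above are the kubelet telling the operator to move flags like --container-runtime-endpoint into the config file it still cannot find. That file is a KubeletConfiguration object; a minimal sketch of generating one follows. The apiVersion/kind are the real schema identifiers, but the field values are illustrative (cgroupDriver: systemd matches SystemdCgroup=true in the containerd config dumped earlier), and on a real node kubeadm writes this file itself.

    package main

    import "os"

    // Minimal KubeletConfiguration; values below are example settings,
    // not this node's actual configuration.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    `

    func main() {
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            panic(err)
        }
    }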
Feb 13 19:09:50.126853 kubelet[2284]: I0213 19:09:50.126783 2284 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:09:50.809732 kubelet[2284]: I0213 19:09:50.809684 2284 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:09:50.809732 kubelet[2284]: I0213 19:09:50.809716 2284 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:09:50.809985 kubelet[2284]: I0213 19:09:50.809918 2284 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:09:50.838680 kubelet[2284]: E0213 19:09:50.838630 2284 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.838680 kubelet[2284]: I0213 19:09:50.838674 2284 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:09:50.848870 kubelet[2284]: I0213 19:09:50.848191 2284 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:09:50.848870 kubelet[2284]: I0213 19:09:50.848557 2284 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:09:50.848870 kubelet[2284]: I0213 19:09:50.848579 2284 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:09:50.848870 kubelet[2284]: I0213 19:09:50.848801 2284 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:09:50.849089 kubelet[2284]: I0213 19:09:50.848810 2284 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:09:50.849089 kubelet[2284]: I0213 19:09:50.849083 2284 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:09:50.849969 kubelet[2284]: I0213 19:09:50.849942 2284 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:09:50.849969 kubelet[2284]: I0213 19:09:50.849967 2284 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:09:50.850289 kubelet[2284]: I0213 19:09:50.850265 2284 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:09:50.850443 kubelet[2284]: I0213 19:09:50.850423 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:09:50.850980 kubelet[2284]: W0213 19:09:50.850746 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.850980 kubelet[2284]: E0213 19:09:50.850797 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.850980 kubelet[2284]: W0213 19:09:50.850901 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.850980 kubelet[2284]: E0213 19:09:50.850953 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.852203 kubelet[2284]: I0213 19:09:50.852181 2284 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:09:50.854667 kubelet[2284]: I0213 19:09:50.854633 2284 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:09:50.854794 kubelet[2284]: W0213 19:09:50.854772 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
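Every reflector above fails the same way: dial tcp 10.0.0.132:6443: connect: connection refused. The kubelet comes up before the kube-apiserver it is about to launch as a static pod, so "refused" (host reachable, nothing listening yet) rather than a timeout is the expected failure mode here. A sketch of the same probe, distinguishing the two cases:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        // Endpoint taken from the reflector errors above.
        conn, err := net.DialTimeout("tcp", "10.0.0.132:6443", 2*time.Second)
        var ne net.Error
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("apiserver port is open")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("connection refused: host is up, nothing listening on 6443 yet")
        case errors.As(err, &ne) && ne.Timeout():
            fmt.Println("timeout: host or port unreachable/filtered")
        default:
            fmt.Println("dial error:", err)
        }
    }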
Feb 13 19:09:50.856092 kubelet[2284]: I0213 19:09:50.856072 2284 server.go:1264] "Started kubelet" Feb 13 19:09:50.857952 kubelet[2284]: I0213 19:09:50.857672 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:09:50.859558 kubelet[2284]: I0213 19:09:50.858342 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:09:50.859558 kubelet[2284]: I0213 19:09:50.858640 2284 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:09:50.861131 kubelet[2284]: I0213 19:09:50.859953 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:09:50.861131 kubelet[2284]: I0213 19:09:50.860150 2284 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:09:50.862112 kubelet[2284]: E0213 19:09:50.861462 2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:09:50.862112 kubelet[2284]: I0213 19:09:50.861864 2284 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:09:50.862112 kubelet[2284]: I0213 19:09:50.861992 2284 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:09:50.863495 kubelet[2284]: W0213 19:09:50.863446 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.863579 kubelet[2284]: E0213 19:09:50.863502 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.863632 kubelet[2284]: E0213 19:09:50.863610 2284 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:09:50.864443 kubelet[2284]: E0213 19:09:50.864294 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Feb 13 19:09:50.864584 kubelet[2284]: E0213 19:09:50.860458 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823da3258139c1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:09:50.856043547 +0000 UTC m=+0.774757220,LastTimestamp:2025-02-13 19:09:50.856043547 +0000 UTC m=+0.774757220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:09:50.864786 kubelet[2284]: I0213 19:09:50.864760 2284 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:09:50.864897 kubelet[2284]: I0213 19:09:50.864866 2284 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:09:50.864988 kubelet[2284]: I0213 19:09:50.864972 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:09:50.866056 kubelet[2284]: I0213 19:09:50.866018 2284 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:09:50.876884 kubelet[2284]: I0213 19:09:50.876789 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:09:50.877916 kubelet[2284]: I0213 19:09:50.877799 2284 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:09:50.877978 kubelet[2284]: I0213 19:09:50.877968 2284 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:09:50.877999 kubelet[2284]: I0213 19:09:50.877988 2284 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:09:50.878065 kubelet[2284]: E0213 19:09:50.878029 2284 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:09:50.881513 kubelet[2284]: W0213 19:09:50.881375 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.881513 kubelet[2284]: E0213 19:09:50.881429 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:50.882714 kubelet[2284]: I0213 19:09:50.882653 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:09:50.882714 kubelet[2284]: I0213 19:09:50.882665 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:09:50.882714 kubelet[2284]: I0213 19:09:50.882681 2284 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:09:50.899237 kubelet[2284]: I0213 19:09:50.899211 2284 policy_none.go:49] "None policy: Start" Feb 13 19:09:50.899926 kubelet[2284]: I0213 19:09:50.899907 2284 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:09:50.899989 kubelet[2284]: I0213 19:09:50.899971 2284 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:09:50.905888 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:09:50.922358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:09:50.924984 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:09:50.938072 kubelet[2284]: I0213 19:09:50.937891 2284 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:09:50.938157 kubelet[2284]: I0213 19:09:50.938088 2284 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:09:50.938461 kubelet[2284]: I0213 19:09:50.938188 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:09:50.939562 kubelet[2284]: E0213 19:09:50.939526 2284 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:09:50.963902 kubelet[2284]: I0213 19:09:50.963883 2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:09:50.964328 kubelet[2284]: E0213 19:09:50.964303 2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 13 19:09:50.978594 kubelet[2284]: I0213 19:09:50.978543 2284 topology_manager.go:215] "Topology Admit Handler" podUID="0f2e878b1115ff9a426ba0e35bfcf2b7" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:09:50.979578 kubelet[2284]: I0213 19:09:50.979557 2284 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:09:50.980462 kubelet[2284]: I0213 19:09:50.980438 2284 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:09:50.986368 systemd[1]: Created slice kubepods-burstable-pod0f2e878b1115ff9a426ba0e35bfcf2b7.slice - libcontainer container kubepods-burstable-pod0f2e878b1115ff9a426ba0e35bfcf2b7.slice. Feb 13 19:09:51.007983 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:09:51.021112 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
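The three Topology Admit Handler entries are the control-plane static pods: the kubelet watches the directory it registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests") and runs whatever pod manifests appear there with no apiserver involved, which is how the apiserver itself gets started. A sketch of listing that directory; the conventional kubeadm file names mentioned in the comment are assumptions and may differ:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory from "Adding static pod path" above; on a kubeadm node it
        // conventionally holds kube-apiserver.yaml, kube-controller-manager.yaml,
        // and kube-scheduler.yaml, but any pod manifest placed here is run.
        const dir = "/etc/kubernetes/manifests"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }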
Feb 13 19:09:51.065172 kubelet[2284]: E0213 19:09:51.065063 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Feb 13 19:09:51.066170 kubelet[2284]: I0213 19:09:51.066128 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:51.066170 kubelet[2284]: I0213 19:09:51.066164 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:51.066237 kubelet[2284]: I0213 19:09:51.066183 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:51.066237 kubelet[2284]: I0213 19:09:51.066205 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:51.066237 kubelet[2284]: I0213 19:09:51.066225 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:09:51.066296 kubelet[2284]: I0213 19:09:51.066239 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:51.066296 kubelet[2284]: I0213 19:09:51.066254 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:51.066496 kubelet[2284]: I0213 19:09:51.066437 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:51.066496 kubelet[2284]: 
I0213 19:09:51.066459 2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:51.166158 kubelet[2284]: I0213 19:09:51.166131 2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:09:51.166671 kubelet[2284]: E0213 19:09:51.166421 2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 13 19:09:51.307097 kubelet[2284]: E0213 19:09:51.307015 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.307867 containerd[1475]: time="2025-02-13T19:09:51.307789392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f2e878b1115ff9a426ba0e35bfcf2b7,Namespace:kube-system,Attempt:0,}" Feb 13 19:09:51.319986 kubelet[2284]: E0213 19:09:51.319901 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.320312 containerd[1475]: time="2025-02-13T19:09:51.320276493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:09:51.322939 kubelet[2284]: E0213 19:09:51.322913 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.323257 containerd[1475]: time="2025-02-13T19:09:51.323220209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:09:51.466073 kubelet[2284]: E0213 19:09:51.466022 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Feb 13 19:09:51.568553 kubelet[2284]: I0213 19:09:51.568527 2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:09:51.568987 kubelet[2284]: E0213 19:09:51.568960 2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 13 19:09:51.784605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233527253.mount: Deactivated successfully. 
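The "Nameserver limits exceeded" warnings below come from the glibc resolver's cap of three nameservers per resolv.conf: the host list had more entries, so the kubelet truncated it to the three shown (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of the same check:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNS = 3 // glibc resolver limit (MAXNS)

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            fmt.Printf("%d nameservers; only the first %d will be used: %v\n",
                len(servers), maxNS, servers[:maxNS])
        }
    }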
Feb 13 19:09:51.790006 containerd[1475]: time="2025-02-13T19:09:51.789964753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:09:51.791872 containerd[1475]: time="2025-02-13T19:09:51.791817442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:09:51.792537 containerd[1475]: time="2025-02-13T19:09:51.792432461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:09:51.793671 containerd[1475]: time="2025-02-13T19:09:51.793644559Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:09:51.794299 containerd[1475]: time="2025-02-13T19:09:51.794152013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:09:51.794981 containerd[1475]: time="2025-02-13T19:09:51.794956988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:09:51.795799 containerd[1475]: time="2025-02-13T19:09:51.795756329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:09:51.797361 containerd[1475]: time="2025-02-13T19:09:51.797326562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:09:51.799484 containerd[1475]: time="2025-02-13T19:09:51.799451718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.175289ms" Feb 13 19:09:51.800353 containerd[1475]: time="2025-02-13T19:09:51.800207546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.32042ms" Feb 13 19:09:51.802413 containerd[1475]: time="2025-02-13T19:09:51.802383328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.041665ms" Feb 13 19:09:51.840448 kubelet[2284]: W0213 19:09:51.840379 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:51.840448 kubelet[2284]: 
E0213 19:09:51.840447 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:51.923106 containerd[1475]: time="2025-02-13T19:09:51.922685408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:09:51.923106 containerd[1475]: time="2025-02-13T19:09:51.922887711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:09:51.923106 containerd[1475]: time="2025-02-13T19:09:51.922971381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.923319 containerd[1475]: time="2025-02-13T19:09:51.923183593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.923909932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.923957241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.923972185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.924034159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.924565428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.924601429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.924611338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.925496 containerd[1475]: time="2025-02-13T19:09:51.924664921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:09:51.944044 systemd[1]: Started cri-containerd-8572dca60fae35e76b1008171234ba95215e23e11e3703dbbff23fd65a5c7bf8.scope - libcontainer container 8572dca60fae35e76b1008171234ba95215e23e11e3703dbbff23fd65a5c7bf8. Feb 13 19:09:51.948016 systemd[1]: Started cri-containerd-474a9c1cc49d6ed05ec51a59fb4e6579b79d25337a108672f87b4051f9522a1b.scope - libcontainer container 474a9c1cc49d6ed05ec51a59fb4e6579b79d25337a108672f87b4051f9522a1b. Feb 13 19:09:51.949778 systemd[1]: Started cri-containerd-cf5aa63c76d9a4d492fe29ce2ff4fdf9d2107fa1d6704350015a8bb909c3a388.scope - libcontainer container cf5aa63c76d9a4d492fe29ce2ff4fdf9d2107fa1d6704350015a8bb909c3a388. 
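
The reflector warning/error pair at 19:09:51.840 is the kubelet's client-go informer failing its initial LIST of CSIDriver objects for the same reason: nothing answers on 6443 yet, so the reflector will re-list with backoff until the apiserver container started below comes up. A hedged sketch of the equivalent LIST call follows; in-cluster config is an illustrative assumption (the kubelet builds its client differently).

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirrors GET /apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0
    	list, err := cs.StorageV1().CSIDrivers().List(context.TODO(),
    		metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
    	if err != nil {
    		// The reflector logs exactly this kind of failure and re-lists later.
    		log.Fatalf("list failed: %v", err)
    	}
    	log.Printf("%d CSIDrivers", len(list.Items))
    }
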
Feb 13 19:09:51.979861 containerd[1475]: time="2025-02-13T19:09:51.979797194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"8572dca60fae35e76b1008171234ba95215e23e11e3703dbbff23fd65a5c7bf8\"" Feb 13 19:09:51.981437 kubelet[2284]: E0213 19:09:51.981180 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.983576 containerd[1475]: time="2025-02-13T19:09:51.983541730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f2e878b1115ff9a426ba0e35bfcf2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf5aa63c76d9a4d492fe29ce2ff4fdf9d2107fa1d6704350015a8bb909c3a388\"" Feb 13 19:09:51.984525 kubelet[2284]: E0213 19:09:51.984505 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.984797 containerd[1475]: time="2025-02-13T19:09:51.984706878Z" level=info msg="CreateContainer within sandbox \"8572dca60fae35e76b1008171234ba95215e23e11e3703dbbff23fd65a5c7bf8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:09:51.987339 containerd[1475]: time="2025-02-13T19:09:51.987303887Z" level=info msg="CreateContainer within sandbox \"cf5aa63c76d9a4d492fe29ce2ff4fdf9d2107fa1d6704350015a8bb909c3a388\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:09:51.987495 containerd[1475]: time="2025-02-13T19:09:51.987376729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"474a9c1cc49d6ed05ec51a59fb4e6579b79d25337a108672f87b4051f9522a1b\"" Feb 13 19:09:51.988160 kubelet[2284]: E0213 19:09:51.988140 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:51.990268 containerd[1475]: time="2025-02-13T19:09:51.990242649Z" level=info msg="CreateContainer within sandbox \"474a9c1cc49d6ed05ec51a59fb4e6579b79d25337a108672f87b4051f9522a1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:09:52.000892 containerd[1475]: time="2025-02-13T19:09:52.000836065Z" level=info msg="CreateContainer within sandbox \"8572dca60fae35e76b1008171234ba95215e23e11e3703dbbff23fd65a5c7bf8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75b7bdeb23c4d2940753269994aa0349861b3f5f0f22116a10592a077e359df5\"" Feb 13 19:09:52.001418 containerd[1475]: time="2025-02-13T19:09:52.001384709Z" level=info msg="StartContainer for \"75b7bdeb23c4d2940753269994aa0349861b3f5f0f22116a10592a077e359df5\"" Feb 13 19:09:52.005034 containerd[1475]: time="2025-02-13T19:09:52.004994115Z" level=info msg="CreateContainer within sandbox \"cf5aa63c76d9a4d492fe29ce2ff4fdf9d2107fa1d6704350015a8bb909c3a388\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7a6b8ff469d1607960f30bf57ff666669edd8d5ae28f0ccb8c86aabfe85691a\"" Feb 13 19:09:52.005441 containerd[1475]: time="2025-02-13T19:09:52.005415559Z" level=info msg="StartContainer for \"d7a6b8ff469d1607960f30bf57ff666669edd8d5ae28f0ccb8c86aabfe85691a\"" Feb 13 19:09:52.008222 
containerd[1475]: time="2025-02-13T19:09:52.008181598Z" level=info msg="CreateContainer within sandbox \"474a9c1cc49d6ed05ec51a59fb4e6579b79d25337a108672f87b4051f9522a1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7a001b69afda557138d0f21a188cf2ef70770d8c83a0bbd8457983280444d64\"" Feb 13 19:09:52.008605 containerd[1475]: time="2025-02-13T19:09:52.008583581Z" level=info msg="StartContainer for \"c7a001b69afda557138d0f21a188cf2ef70770d8c83a0bbd8457983280444d64\"" Feb 13 19:09:52.023081 systemd[1]: Started cri-containerd-75b7bdeb23c4d2940753269994aa0349861b3f5f0f22116a10592a077e359df5.scope - libcontainer container 75b7bdeb23c4d2940753269994aa0349861b3f5f0f22116a10592a077e359df5. Feb 13 19:09:52.027325 systemd[1]: Started cri-containerd-d7a6b8ff469d1607960f30bf57ff666669edd8d5ae28f0ccb8c86aabfe85691a.scope - libcontainer container d7a6b8ff469d1607960f30bf57ff666669edd8d5ae28f0ccb8c86aabfe85691a. Feb 13 19:09:52.040987 systemd[1]: Started cri-containerd-c7a001b69afda557138d0f21a188cf2ef70770d8c83a0bbd8457983280444d64.scope - libcontainer container c7a001b69afda557138d0f21a188cf2ef70770d8c83a0bbd8457983280444d64. Feb 13 19:09:52.069808 containerd[1475]: time="2025-02-13T19:09:52.069130056Z" level=info msg="StartContainer for \"75b7bdeb23c4d2940753269994aa0349861b3f5f0f22116a10592a077e359df5\" returns successfully" Feb 13 19:09:52.084865 containerd[1475]: time="2025-02-13T19:09:52.084808796Z" level=info msg="StartContainer for \"d7a6b8ff469d1607960f30bf57ff666669edd8d5ae28f0ccb8c86aabfe85691a\" returns successfully" Feb 13 19:09:52.085107 containerd[1475]: time="2025-02-13T19:09:52.085063596Z" level=info msg="StartContainer for \"c7a001b69afda557138d0f21a188cf2ef70770d8c83a0bbd8457983280444d64\" returns successfully" Feb 13 19:09:52.108984 kubelet[2284]: W0213 19:09:52.108929 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:52.109147 kubelet[2284]: E0213 19:09:52.109133 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:52.223517 kubelet[2284]: W0213 19:09:52.223461 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:52.223908 kubelet[2284]: E0213 19:09:52.223889 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 13 19:09:52.267781 kubelet[2284]: E0213 19:09:52.267732 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Feb 13 19:09:52.370452 kubelet[2284]: I0213 19:09:52.370428 2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:09:52.888448 kubelet[2284]: E0213 19:09:52.888422 2284 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:52.892855 kubelet[2284]: E0213 19:09:52.889379 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:52.892855 kubelet[2284]: E0213 19:09:52.890762 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:53.855853 kubelet[2284]: I0213 19:09:53.855798 2284 apiserver.go:52] "Watching apiserver" Feb 13 19:09:53.857853 kubelet[2284]: I0213 19:09:53.857807 2284 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:09:53.863210 kubelet[2284]: I0213 19:09:53.862312 2284 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:09:53.905311 kubelet[2284]: E0213 19:09:53.905275 2284 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:53.906016 kubelet[2284]: E0213 19:09:53.905990 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:55.563223 systemd[1]: Reloading requested from client PID 2565 ('systemctl') (unit session-7.scope)... Feb 13 19:09:55.563239 systemd[1]: Reloading... Feb 13 19:09:55.627879 zram_generator::config[2608]: No configuration found. Feb 13 19:09:55.707950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:55.773910 systemd[1]: Reloading finished in 210 ms. Feb 13 19:09:55.804679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:55.815663 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:09:55.815920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:55.815979 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 118.4M memory peak, 0B memory swap peak. Feb 13 19:09:55.826149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:55.917356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:55.921159 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:09:55.958805 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:09:55.958805 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:09:55.958805 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:09:55.959183 kubelet[2646]: I0213 19:09:55.958875 2646 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:09:55.964003 kubelet[2646]: I0213 19:09:55.963965 2646 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:09:55.964003 kubelet[2646]: I0213 19:09:55.963996 2646 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:09:55.964207 kubelet[2646]: I0213 19:09:55.964192 2646 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:09:55.965590 kubelet[2646]: I0213 19:09:55.965568 2646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:09:55.966759 kubelet[2646]: I0213 19:09:55.966737 2646 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:09:55.976684 kubelet[2646]: I0213 19:09:55.976660 2646 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:09:55.976970 kubelet[2646]: I0213 19:09:55.976921 2646 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:09:55.977174 kubelet[2646]: I0213 19:09:55.976965 2646 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:09:55.977174 kubelet[2646]: I0213 19:09:55.977172 2646 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:09:55.977272 kubelet[2646]: I0213 19:09:55.977182 2646 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:09:55.977272 kubelet[2646]: I0213 19:09:55.977215 2646 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:09:55.977356 kubelet[2646]: I0213 19:09:55.977343 2646 kubelet.go:400] "Attempting to sync node with API server" Feb 13 
19:09:55.977380 kubelet[2646]: I0213 19:09:55.977360 2646 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:09:55.977504 kubelet[2646]: I0213 19:09:55.977483 2646 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:09:55.977504 kubelet[2646]: I0213 19:09:55.977505 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:09:55.978700 kubelet[2646]: I0213 19:09:55.978681 2646 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:09:55.979167 kubelet[2646]: I0213 19:09:55.979149 2646 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:09:55.983872 kubelet[2646]: I0213 19:09:55.983210 2646 server.go:1264] "Started kubelet" Feb 13 19:09:55.985939 kubelet[2646]: I0213 19:09:55.984147 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:09:55.985939 kubelet[2646]: I0213 19:09:55.984325 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:09:55.985939 kubelet[2646]: I0213 19:09:55.984398 2646 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:09:55.985939 kubelet[2646]: I0213 19:09:55.984432 2646 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:09:55.986656 kubelet[2646]: I0213 19:09:55.986633 2646 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:09:55.989440 kubelet[2646]: I0213 19:09:55.988255 2646 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:09:55.989440 kubelet[2646]: I0213 19:09:55.988831 2646 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:09:55.989440 kubelet[2646]: I0213 19:09:55.989014 2646 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:09:55.999384 kubelet[2646]: E0213 19:09:55.999333 2646 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:09:56.001586 kubelet[2646]: I0213 19:09:55.999510 2646 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:09:56.001586 kubelet[2646]: I0213 19:09:55.999575 2646 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:09:56.001877 kubelet[2646]: I0213 19:09:56.001753 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:09:56.008268 kubelet[2646]: I0213 19:09:56.008236 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:09:56.009504 kubelet[2646]: I0213 19:09:56.009439 2646 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:09:56.009504 kubelet[2646]: I0213 19:09:56.009476 2646 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:09:56.009504 kubelet[2646]: I0213 19:09:56.009494 2646 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:09:56.010066 kubelet[2646]: E0213 19:09:56.009563 2646 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:09:56.029971 kubelet[2646]: I0213 19:09:56.029943 2646 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:09:56.029971 kubelet[2646]: I0213 19:09:56.029963 2646 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:09:56.030105 kubelet[2646]: I0213 19:09:56.029984 2646 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:09:56.030161 kubelet[2646]: I0213 19:09:56.030143 2646 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:09:56.030192 kubelet[2646]: I0213 19:09:56.030160 2646 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:09:56.030192 kubelet[2646]: I0213 19:09:56.030179 2646 policy_none.go:49] "None policy: Start" Feb 13 19:09:56.030817 kubelet[2646]: I0213 19:09:56.030774 2646 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:09:56.030817 kubelet[2646]: I0213 19:09:56.030810 2646 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:09:56.030994 kubelet[2646]: I0213 19:09:56.030969 2646 state_mem.go:75] "Updated machine memory state" Feb 13 19:09:56.034672 kubelet[2646]: I0213 19:09:56.034622 2646 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:09:56.034889 kubelet[2646]: I0213 19:09:56.034777 2646 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:09:56.034889 kubelet[2646]: I0213 19:09:56.034891 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:09:56.095257 kubelet[2646]: I0213 19:09:56.094501 2646 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:09:56.104280 kubelet[2646]: I0213 19:09:56.104239 2646 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:09:56.104428 kubelet[2646]: I0213 19:09:56.104322 2646 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:09:56.109793 kubelet[2646]: I0213 19:09:56.109730 2646 topology_manager.go:215] "Topology Admit Handler" podUID="0f2e878b1115ff9a426ba0e35bfcf2b7" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:09:56.109932 kubelet[2646]: I0213 19:09:56.109887 2646 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:09:56.109958 kubelet[2646]: I0213 19:09:56.109943 2646 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:09:56.289553 kubelet[2646]: I0213 19:09:56.289512 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 
13 19:09:56.289553 kubelet[2646]: I0213 19:09:56.289555 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:56.289718 kubelet[2646]: I0213 19:09:56.289589 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:56.289718 kubelet[2646]: I0213 19:09:56.289607 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:09:56.289718 kubelet[2646]: I0213 19:09:56.289623 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:56.289718 kubelet[2646]: I0213 19:09:56.289637 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:56.289718 kubelet[2646]: I0213 19:09:56.289651 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f2e878b1115ff9a426ba0e35bfcf2b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f2e878b1115ff9a426ba0e35bfcf2b7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:56.289829 kubelet[2646]: I0213 19:09:56.289665 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:56.289829 kubelet[2646]: I0213 19:09:56.289681 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:56.419473 kubelet[2646]: E0213 19:09:56.419225 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:56.419473 kubelet[2646]: E0213 19:09:56.419366 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:56.419832 kubelet[2646]: E0213 19:09:56.419796 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:56.563258 sudo[2682]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:09:56.563535 sudo[2682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:09:56.980218 kubelet[2646]: I0213 19:09:56.978838 2646 apiserver.go:52] "Watching apiserver" Feb 13 19:09:56.988196 sudo[2682]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:56.989608 kubelet[2646]: I0213 19:09:56.989575 2646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:09:57.020021 kubelet[2646]: E0213 19:09:57.019402 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:57.024711 kubelet[2646]: E0213 19:09:57.024679 2646 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:09:57.025218 kubelet[2646]: E0213 19:09:57.025163 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:57.028439 kubelet[2646]: E0213 19:09:57.028411 2646 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:09:57.029494 kubelet[2646]: E0213 19:09:57.029422 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:57.050596 kubelet[2646]: I0213 19:09:57.050403 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.050385932 podStartE2EDuration="1.050385932s" podCreationTimestamp="2025-02-13 19:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:09:57.050324081 +0000 UTC m=+1.126066246" watchObservedRunningTime="2025-02-13 19:09:57.050385932 +0000 UTC m=+1.126128097" Feb 13 19:09:57.050959 kubelet[2646]: I0213 19:09:57.050568 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.050562726 podStartE2EDuration="1.050562726s" podCreationTimestamp="2025-02-13 19:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:09:57.039337217 +0000 UTC m=+1.115079382" watchObservedRunningTime="2025-02-13 19:09:57.050562726 +0000 UTC m=+1.126304891" Feb 13 19:09:57.070108 kubelet[2646]: I0213 19:09:57.069920 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.069903684 podStartE2EDuration="1.069903684s" podCreationTimestamp="2025-02-13 19:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:09:57.061007892 +0000 UTC m=+1.136750057" watchObservedRunningTime="2025-02-13 19:09:57.069903684 +0000 UTC m=+1.145645849" Feb 13 19:09:58.020552 kubelet[2646]: E0213 19:09:58.020467 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:58.020922 kubelet[2646]: E0213 19:09:58.020644 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:09:58.461725 sudo[1651]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:58.463814 sshd[1650]: Connection closed by 10.0.0.1 port 43118 Feb 13 19:09:58.464496 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:58.468513 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:43118.service: Deactivated successfully. Feb 13 19:09:58.471035 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:09:58.471191 systemd[1]: session-7.scope: Consumed 6.776s CPU time, 194.0M memory peak, 0B memory swap peak. Feb 13 19:09:58.471615 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:09:58.472607 systemd-logind[1454]: Removed session 7. Feb 13 19:09:59.021721 kubelet[2646]: E0213 19:09:59.021687 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:00.022953 kubelet[2646]: E0213 19:10:00.022915 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:01.198410 kubelet[2646]: E0213 19:10:01.197929 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:01.935955 kubelet[2646]: E0213 19:10:01.935831 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:08.595162 kubelet[2646]: E0213 19:10:08.595115 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:09.037224 kubelet[2646]: E0213 19:10:09.037171 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:11.205764 kubelet[2646]: E0213 19:10:11.205724 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:11.478141 kubelet[2646]: I0213 19:10:11.478026 2646 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:10:11.483651 containerd[1475]: time="2025-02-13T19:10:11.483594586Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
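
By 19:10:11 the node object has been assigned a pod CIDR, and the kubelet pushes it to containerd over CRI ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"); containerd replies that no CNI config template is specified, meaning it will wait for Cilium to install a CNI config instead, and the kubelet_network entry just below records the same CIDR on the node. Here is a hedged sketch of that CRI call via k8s.io/cri-api; the containerd socket path is the conventional default and an assumption here.

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// The same runtime-config update the kubelet sends after the node gets a CIDR.
    	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
    		RuntimeConfig: &runtimeapi.RuntimeConfig{
    			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("pod CIDR pushed; containerd acts on it only if a CNI config template is set")
    }
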
Feb 13 19:10:11.483997 kubelet[2646]: I0213 19:10:11.483928 2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:10:11.944331 kubelet[2646]: E0213 19:10:11.944289 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.092594 update_engine[1457]: I20250213 19:10:12.092510 1457 update_attempter.cc:509] Updating boot flags... Feb 13 19:10:12.143386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2732) Feb 13 19:10:12.374045 kubelet[2646]: I0213 19:10:12.373988 2646 topology_manager.go:215] "Topology Admit Handler" podUID="d109591a-dafb-4354-841a-46d8369060bf" podNamespace="kube-system" podName="cilium-operator-599987898-wpkw8" Feb 13 19:10:12.385645 kubelet[2646]: I0213 19:10:12.385433 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wsh\" (UniqueName: \"kubernetes.io/projected/d109591a-dafb-4354-841a-46d8369060bf-kube-api-access-k6wsh\") pod \"cilium-operator-599987898-wpkw8\" (UID: \"d109591a-dafb-4354-841a-46d8369060bf\") " pod="kube-system/cilium-operator-599987898-wpkw8" Feb 13 19:10:12.385645 kubelet[2646]: I0213 19:10:12.385656 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d109591a-dafb-4354-841a-46d8369060bf-cilium-config-path\") pod \"cilium-operator-599987898-wpkw8\" (UID: \"d109591a-dafb-4354-841a-46d8369060bf\") " pod="kube-system/cilium-operator-599987898-wpkw8" Feb 13 19:10:12.385991 systemd[1]: Created slice kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice - libcontainer container kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice. Feb 13 19:10:12.603550 kubelet[2646]: I0213 19:10:12.603507 2646 topology_manager.go:215] "Topology Admit Handler" podUID="609813f1-2f11-426c-aa26-dce25e184fe4" podNamespace="kube-system" podName="kube-proxy-fm24g" Feb 13 19:10:12.611273 kubelet[2646]: I0213 19:10:12.610231 2646 topology_manager.go:215] "Topology Admit Handler" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" podNamespace="kube-system" podName="cilium-qd25c" Feb 13 19:10:12.614765 systemd[1]: Created slice kubepods-besteffort-pod609813f1_2f11_426c_aa26_dce25e184fe4.slice - libcontainer container kubepods-besteffort-pod609813f1_2f11_426c_aa26_dce25e184fe4.slice. Feb 13 19:10:12.643503 systemd[1]: Created slice kubepods-burstable-pod7ef57c1c_47f5_4f73_ba0b_b1ce9a2e9b4a.slice - libcontainer container kubepods-burstable-pod7ef57c1c_47f5_4f73_ba0b_b1ce9a2e9b4a.slice. 
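
The slice names systemd creates here encode the pod's QoS class and UID: dashes in the UID become underscores because "-" is systemd's slice-hierarchy separator, so UID d109591a-dafb-4354-841a-46d8369060bf yields kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice. A small sketch of the mapping; sliceForPod is a made-up illustrative helper, not a kubelet API.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceForPod is hypothetical: it reproduces the naming visible in the log,
    // escaping "-" in the UID so it is not read as a slice separator.
    func sliceForPod(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(sliceForPod("besteffort", "d109591a-dafb-4354-841a-46d8369060bf"))
    	// kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice
    	fmt.Println(sliceForPod("burstable", "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"))
    }
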
Feb 13 19:10:12.687231 kubelet[2646]: I0213 19:10:12.687174 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/609813f1-2f11-426c-aa26-dce25e184fe4-kube-proxy\") pod \"kube-proxy-fm24g\" (UID: \"609813f1-2f11-426c-aa26-dce25e184fe4\") " pod="kube-system/kube-proxy-fm24g" Feb 13 19:10:12.687231 kubelet[2646]: I0213 19:10:12.687229 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hostproc\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687365 kubelet[2646]: I0213 19:10:12.687247 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-net\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687365 kubelet[2646]: I0213 19:10:12.687265 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/609813f1-2f11-426c-aa26-dce25e184fe4-xtables-lock\") pod \"kube-proxy-fm24g\" (UID: \"609813f1-2f11-426c-aa26-dce25e184fe4\") " pod="kube-system/kube-proxy-fm24g" Feb 13 19:10:12.687365 kubelet[2646]: I0213 19:10:12.687283 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hubble-tls\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687365 kubelet[2646]: I0213 19:10:12.687299 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cni-path\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687365 kubelet[2646]: I0213 19:10:12.687315 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-config-path\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687331 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/609813f1-2f11-426c-aa26-dce25e184fe4-kube-api-access-62swh\") pod \"kube-proxy-fm24g\" (UID: \"609813f1-2f11-426c-aa26-dce25e184fe4\") " pod="kube-system/kube-proxy-fm24g" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687348 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-run\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687364 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-etc-cni-netd\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687383 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-lib-modules\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687402 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-bpf-maps\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687478 kubelet[2646]: I0213 19:10:12.687417 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-cgroup\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687597 kubelet[2646]: I0213 19:10:12.687445 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-xtables-lock\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687597 kubelet[2646]: I0213 19:10:12.687459 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-kernel\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687597 kubelet[2646]: I0213 19:10:12.687474 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdg8d\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-kube-api-access-rdg8d\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.687597 kubelet[2646]: I0213 19:10:12.687488 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/609813f1-2f11-426c-aa26-dce25e184fe4-lib-modules\") pod \"kube-proxy-fm24g\" (UID: \"609813f1-2f11-426c-aa26-dce25e184fe4\") " pod="kube-system/kube-proxy-fm24g" Feb 13 19:10:12.687597 kubelet[2646]: I0213 19:10:12.687505 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-clustermesh-secrets\") pod \"cilium-qd25c\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " pod="kube-system/cilium-qd25c" Feb 13 19:10:12.697614 kubelet[2646]: E0213 19:10:12.697373 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.702993 containerd[1475]: time="2025-02-13T19:10:12.702945952Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-wpkw8,Uid:d109591a-dafb-4354-841a-46d8369060bf,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:12.725242 containerd[1475]: time="2025-02-13T19:10:12.725110718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:12.725413 containerd[1475]: time="2025-02-13T19:10:12.725262759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:12.725413 containerd[1475]: time="2025-02-13T19:10:12.725317080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.725522 containerd[1475]: time="2025-02-13T19:10:12.725441881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.749066 systemd[1]: Started cri-containerd-dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184.scope - libcontainer container dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184. Feb 13 19:10:12.774901 containerd[1475]: time="2025-02-13T19:10:12.774832659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wpkw8,Uid:d109591a-dafb-4354-841a-46d8369060bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\"" Feb 13 19:10:12.775653 kubelet[2646]: E0213 19:10:12.775630 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.777167 containerd[1475]: time="2025-02-13T19:10:12.777086840Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:10:12.923602 kubelet[2646]: E0213 19:10:12.923493 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.923996 containerd[1475]: time="2025-02-13T19:10:12.923856523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm24g,Uid:609813f1-2f11-426c-aa26-dce25e184fe4,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:12.944459 containerd[1475]: time="2025-02-13T19:10:12.944366873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:12.944459 containerd[1475]: time="2025-02-13T19:10:12.944419713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:12.944459 containerd[1475]: time="2025-02-13T19:10:12.944431073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.944666 containerd[1475]: time="2025-02-13T19:10:12.944499954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.947015 kubelet[2646]: E0213 19:10:12.946976 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.948138 containerd[1475]: time="2025-02-13T19:10:12.947807345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qd25c,Uid:7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:12.970024 systemd[1]: Started cri-containerd-52643909c246b7afa77614561077402616796acf0866ca29108a0b95c2f2c854.scope - libcontainer container 52643909c246b7afa77614561077402616796acf0866ca29108a0b95c2f2c854. Feb 13 19:10:12.977037 containerd[1475]: time="2025-02-13T19:10:12.976924055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:12.977037 containerd[1475]: time="2025-02-13T19:10:12.976977416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:12.977037 containerd[1475]: time="2025-02-13T19:10:12.976989136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.977307 containerd[1475]: time="2025-02-13T19:10:12.977074776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:12.997010 systemd[1]: Started cri-containerd-7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b.scope - libcontainer container 7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b. 
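
The dns.go:153 "Nameserver limits exceeded" error recurring throughout this log is a capping behavior, not a hard failure: the host resolv.conf lists more nameservers than the kubelet will propagate into pods, so it applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A hedged sketch of that capping follows; the three-server limit matches upstream kubelet defaults, but treat the code as an illustration rather than kubelet source.

    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const maxNameservers = 3 // kubelet's per-pod nameserver cap
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("nameserver limits exceeded, omitting %d; applied: %s\n",
    			len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Println("applied:", strings.Join(servers, " "))
    	}
    }
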
Feb 13 19:10:12.997770 containerd[1475]: time="2025-02-13T19:10:12.997633487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fm24g,Uid:609813f1-2f11-426c-aa26-dce25e184fe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"52643909c246b7afa77614561077402616796acf0866ca29108a0b95c2f2c854\"" Feb 13 19:10:12.998302 kubelet[2646]: E0213 19:10:12.998280 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:13.001390 containerd[1475]: time="2025-02-13T19:10:13.001250881Z" level=info msg="CreateContainer within sandbox \"52643909c246b7afa77614561077402616796acf0866ca29108a0b95c2f2c854\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:10:13.017331 containerd[1475]: time="2025-02-13T19:10:13.017201782Z" level=info msg="CreateContainer within sandbox \"52643909c246b7afa77614561077402616796acf0866ca29108a0b95c2f2c854\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcd10f9e4dd8c61e69f26f1e84c16688225136a794f5a07ed1a907daf4688fa0\"" Feb 13 19:10:13.021683 containerd[1475]: time="2025-02-13T19:10:13.021616341Z" level=info msg="StartContainer for \"fcd10f9e4dd8c61e69f26f1e84c16688225136a794f5a07ed1a907daf4688fa0\"" Feb 13 19:10:13.023419 containerd[1475]: time="2025-02-13T19:10:13.023393636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qd25c,Uid:7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\"" Feb 13 19:10:13.024759 kubelet[2646]: E0213 19:10:13.024553 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:13.051999 systemd[1]: Started cri-containerd-fcd10f9e4dd8c61e69f26f1e84c16688225136a794f5a07ed1a907daf4688fa0.scope - libcontainer container fcd10f9e4dd8c61e69f26f1e84c16688225136a794f5a07ed1a907daf4688fa0. Feb 13 19:10:13.077140 containerd[1475]: time="2025-02-13T19:10:13.077030870Z" level=info msg="StartContainer for \"fcd10f9e4dd8c61e69f26f1e84c16688225136a794f5a07ed1a907daf4688fa0\" returns successfully" Feb 13 19:10:14.054704 kubelet[2646]: E0213 19:10:14.054402 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:15.051827 kubelet[2646]: E0213 19:10:15.051745 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:15.816501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932989802.mount: Deactivated successfully. 
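
kube-proxy and the Cilium pods above all follow the same CRI sequence: RunPodSandbox returns a sandbox id (52643909… for kube-proxy, 7781ede5… for cilium-qd25c), CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it under a matching cri-containerd-<id>.scope. Below is a hedged sketch that lists the resulting container-to-sandbox pairing over the same CRI socket (path assumed).

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Each container carries the id of the sandbox it was created within.
    	for _, c := range resp.Containers {
    		fmt.Printf("container %.12s in sandbox %.12s state=%s\n",
    			c.Id, c.PodSandboxId, c.State)
    	}
    }
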
Feb 13 19:10:16.083068 containerd[1475]: time="2025-02-13T19:10:16.082955761Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:16.084136 containerd[1475]: time="2025-02-13T19:10:16.083436565Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:10:16.084316 containerd[1475]: time="2025-02-13T19:10:16.084291292Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:16.085888 containerd[1475]: time="2025-02-13T19:10:16.085829743Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.308654462s" Feb 13 19:10:16.086074 containerd[1475]: time="2025-02-13T19:10:16.085973064Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:10:16.088911 containerd[1475]: time="2025-02-13T19:10:16.088551404Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:10:16.089869 containerd[1475]: time="2025-02-13T19:10:16.089824294Z" level=info msg="CreateContainer within sandbox \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:10:16.119112 containerd[1475]: time="2025-02-13T19:10:16.119061317Z" level=info msg="CreateContainer within sandbox \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\"" Feb 13 19:10:16.119765 containerd[1475]: time="2025-02-13T19:10:16.119615162Z" level=info msg="StartContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\"" Feb 13 19:10:16.145054 systemd[1]: Started cri-containerd-9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6.scope - libcontainer container 9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6. 
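
Both Cilium images are pulled by digest (quay.io/cilium/…@sha256:…), which is why containerd records repo tag "" and only a repo digest; the operator image resolved in about 3.3s here. A hedged sketch of an equivalent digest-pinned pull through the CRI image service follows (socket path assumed; the digest is copied from the log).

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	// Digest-pinned reference, so no repo tag is associated with the result.
    	ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
    	start := time.Now()
    	resp, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: ref},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s in %s", resp.ImageRef, time.Since(start))
    }
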
Feb 13 19:10:16.183423 containerd[1475]: time="2025-02-13T19:10:16.183345489Z" level=info msg="StartContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" returns successfully"
Feb 13 19:10:17.056668 kubelet[2646]: E0213 19:10:17.056233 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:17.068407 kubelet[2646]: I0213 19:10:17.068334 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fm24g" podStartSLOduration=5.06831267 podStartE2EDuration="5.06831267s" podCreationTimestamp="2025-02-13 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:14.072339351 +0000 UTC m=+18.148081516" watchObservedRunningTime="2025-02-13 19:10:17.06831267 +0000 UTC m=+21.144054835"
Feb 13 19:10:18.057896 kubelet[2646]: E0213 19:10:18.057867 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:20.260759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532617901.mount: Deactivated successfully.
Feb 13 19:10:21.921486 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926).
Feb 13 19:10:21.993674 sshd[3102]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:21.995563 sshd-session[3102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:22.000605 systemd-logind[1454]: New session 8 of user core.
Feb 13 19:10:22.010204 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:10:22.158411 sshd[3106]: Connection closed by 10.0.0.1 port 53926
Feb 13 19:10:22.158694 sshd-session[3102]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:22.163825 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:53926.service: Deactivated successfully.
Feb 13 19:10:22.167866 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:10:22.168615 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:10:22.169742 systemd-logind[1454]: Removed session 8.
Feb 13 19:10:22.725610 containerd[1475]: time="2025-02-13T19:10:22.725526282Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:10:22.726859 containerd[1475]: time="2025-02-13T19:10:22.726806570Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:10:22.727943 containerd[1475]: time="2025-02-13T19:10:22.727915216Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:10:22.729610 containerd[1475]: time="2025-02-13T19:10:22.729580986Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.640990342s"
Feb 13 19:10:22.729610 containerd[1475]: time="2025-02-13T19:10:22.729612426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:10:22.732666 containerd[1475]: time="2025-02-13T19:10:22.732629724Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:10:22.756073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996129017.mount: Deactivated successfully.
Feb 13 19:10:22.757727 containerd[1475]: time="2025-02-13T19:10:22.757664911Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\""
Feb 13 19:10:22.758625 containerd[1475]: time="2025-02-13T19:10:22.758159994Z" level=info msg="StartContainer for \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\""
Feb 13 19:10:22.790063 systemd[1]: Started cri-containerd-e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff.scope - libcontainer container e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff.
Feb 13 19:10:22.874504 containerd[1475]: time="2025-02-13T19:10:22.874447555Z" level=info msg="StartContainer for \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\" returns successfully"
Feb 13 19:10:22.879129 systemd[1]: cri-containerd-e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff.scope: Deactivated successfully.
Feb 13 19:10:23.023431 containerd[1475]: time="2025-02-13T19:10:23.006060685Z" level=info msg="shim disconnected" id=e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff namespace=k8s.io
Feb 13 19:10:23.023431 containerd[1475]: time="2025-02-13T19:10:23.023347022Z" level=warning msg="cleaning up after shim disconnected" id=e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff namespace=k8s.io
Feb 13 19:10:23.023431 containerd[1475]: time="2025-02-13T19:10:23.023364142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:10:23.069733 kubelet[2646]: E0213 19:10:23.069695 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:23.072532 containerd[1475]: time="2025-02-13T19:10:23.072475818Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:10:23.086710 kubelet[2646]: I0213 19:10:23.086632 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wpkw8" podStartSLOduration=7.77491393 podStartE2EDuration="11.086617337s" podCreationTimestamp="2025-02-13 19:10:12 +0000 UTC" firstStartedPulling="2025-02-13 19:10:12.776650036 +0000 UTC m=+16.852392161" lastFinishedPulling="2025-02-13 19:10:16.088353403 +0000 UTC m=+20.164095568" observedRunningTime="2025-02-13 19:10:17.06828411 +0000 UTC m=+21.144026275" watchObservedRunningTime="2025-02-13 19:10:23.086617337 +0000 UTC m=+27.162359462"
Feb 13 19:10:23.091194 containerd[1475]: time="2025-02-13T19:10:23.091142803Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\""
Feb 13 19:10:23.092133 containerd[1475]: time="2025-02-13T19:10:23.091959887Z" level=info msg="StartContainer for \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\""
Feb 13 19:10:23.118106 systemd[1]: Started cri-containerd-690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127.scope - libcontainer container 690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127.
Feb 13 19:10:23.139319 containerd[1475]: time="2025-02-13T19:10:23.139186633Z" level=info msg="StartContainer for \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\" returns successfully"
Feb 13 19:10:23.176985 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:10:23.178048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:10:23.178119 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:10:23.187945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:10:23.188404 systemd[1]: cri-containerd-690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127.scope: Deactivated successfully.
Feb 13 19:10:23.212423 containerd[1475]: time="2025-02-13T19:10:23.212296284Z" level=info msg="shim disconnected" id=690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127 namespace=k8s.io
Feb 13 19:10:23.212423 containerd[1475]: time="2025-02-13T19:10:23.212354884Z" level=warning msg="cleaning up after shim disconnected" id=690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127 namespace=k8s.io
Feb 13 19:10:23.212423 containerd[1475]: time="2025-02-13T19:10:23.212366924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:10:23.214818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:10:23.754115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff-rootfs.mount: Deactivated successfully.
Feb 13 19:10:24.073855 kubelet[2646]: E0213 19:10:24.073360 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:24.076905 containerd[1475]: time="2025-02-13T19:10:24.076128844Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:10:24.100859 containerd[1475]: time="2025-02-13T19:10:24.100807378Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\""
Feb 13 19:10:24.101514 containerd[1475]: time="2025-02-13T19:10:24.101492341Z" level=info msg="StartContainer for \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\""
Feb 13 19:10:24.133044 systemd[1]: Started cri-containerd-b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8.scope - libcontainer container b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8.
Feb 13 19:10:24.179322 containerd[1475]: time="2025-02-13T19:10:24.177667953Z" level=info msg="StartContainer for \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\" returns successfully"
Feb 13 19:10:24.213927 systemd[1]: cri-containerd-b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8.scope: Deactivated successfully.
Feb 13 19:10:24.241338 containerd[1475]: time="2025-02-13T19:10:24.241257776Z" level=info msg="shim disconnected" id=b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8 namespace=k8s.io
Feb 13 19:10:24.241338 containerd[1475]: time="2025-02-13T19:10:24.241333217Z" level=warning msg="cleaning up after shim disconnected" id=b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8 namespace=k8s.io
Feb 13 19:10:24.241338 containerd[1475]: time="2025-02-13T19:10:24.241342137Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:10:24.754124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8-rootfs.mount: Deactivated successfully.
Feb 13 19:10:25.078560 kubelet[2646]: E0213 19:10:25.077888 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:25.080128 containerd[1475]: time="2025-02-13T19:10:25.080083610Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:10:25.118618 containerd[1475]: time="2025-02-13T19:10:25.118474930Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\""
Feb 13 19:10:25.126428 containerd[1475]: time="2025-02-13T19:10:25.126112929Z" level=info msg="StartContainer for \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\""
Feb 13 19:10:25.161059 systemd[1]: Started cri-containerd-9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a.scope - libcontainer container 9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a.
Feb 13 19:10:25.187774 systemd[1]: cri-containerd-9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a.scope: Deactivated successfully.
Feb 13 19:10:25.190848 containerd[1475]: time="2025-02-13T19:10:25.190733385Z" level=info msg="StartContainer for \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\" returns successfully"
Feb 13 19:10:25.210729 containerd[1475]: time="2025-02-13T19:10:25.210654768Z" level=info msg="shim disconnected" id=9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a namespace=k8s.io
Feb 13 19:10:25.210729 containerd[1475]: time="2025-02-13T19:10:25.210712769Z" level=warning msg="cleaning up after shim disconnected" id=9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a namespace=k8s.io
Feb 13 19:10:25.210729 containerd[1475]: time="2025-02-13T19:10:25.210722689Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:10:25.754135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a-rootfs.mount: Deactivated successfully.
Feb 13 19:10:26.088466 kubelet[2646]: E0213 19:10:26.088338 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:26.094105 containerd[1475]: time="2025-02-13T19:10:26.094047658Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:10:26.111645 containerd[1475]: time="2025-02-13T19:10:26.111587466Z" level=info msg="CreateContainer within sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\""
Feb 13 19:10:26.112477 containerd[1475]: time="2025-02-13T19:10:26.112444470Z" level=info msg="StartContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\""
Feb 13 19:10:26.140059 systemd[1]: Started cri-containerd-8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb.scope - libcontainer container 8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb.
Feb 13 19:10:26.168983 containerd[1475]: time="2025-02-13T19:10:26.168811952Z" level=info msg="StartContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" returns successfully"
Feb 13 19:10:26.291235 kubelet[2646]: I0213 19:10:26.291201 2646 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:10:26.322355 kubelet[2646]: I0213 19:10:26.321551 2646 topology_manager.go:215] "Topology Admit Handler" podUID="58bb50b5-3e34-4567-9e6f-5f28ca610aeb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4vw9k"
Feb 13 19:10:26.322355 kubelet[2646]: I0213 19:10:26.322325 2646 topology_manager.go:215] "Topology Admit Handler" podUID="ae629d2e-ef4f-438d-ae71-9ff980257be1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bw9ct"
Feb 13 19:10:26.333305 systemd[1]: Created slice kubepods-burstable-pod58bb50b5_3e34_4567_9e6f_5f28ca610aeb.slice - libcontainer container kubepods-burstable-pod58bb50b5_3e34_4567_9e6f_5f28ca610aeb.slice.
Feb 13 19:10:26.342810 systemd[1]: Created slice kubepods-burstable-podae629d2e_ef4f_438d_ae71_9ff980257be1.slice - libcontainer container kubepods-burstable-podae629d2e_ef4f_438d_ae71_9ff980257be1.slice.
Feb 13 19:10:26.381485 kubelet[2646]: I0213 19:10:26.381440 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrjw\" (UniqueName: \"kubernetes.io/projected/ae629d2e-ef4f-438d-ae71-9ff980257be1-kube-api-access-vcrjw\") pod \"coredns-7db6d8ff4d-bw9ct\" (UID: \"ae629d2e-ef4f-438d-ae71-9ff980257be1\") " pod="kube-system/coredns-7db6d8ff4d-bw9ct"
Feb 13 19:10:26.381485 kubelet[2646]: I0213 19:10:26.381491 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bb50b5-3e34-4567-9e6f-5f28ca610aeb-config-volume\") pod \"coredns-7db6d8ff4d-4vw9k\" (UID: \"58bb50b5-3e34-4567-9e6f-5f28ca610aeb\") " pod="kube-system/coredns-7db6d8ff4d-4vw9k"
Feb 13 19:10:26.381673 kubelet[2646]: I0213 19:10:26.381510 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnxr\" (UniqueName: \"kubernetes.io/projected/58bb50b5-3e34-4567-9e6f-5f28ca610aeb-kube-api-access-mjnxr\") pod \"coredns-7db6d8ff4d-4vw9k\" (UID: \"58bb50b5-3e34-4567-9e6f-5f28ca610aeb\") " pod="kube-system/coredns-7db6d8ff4d-4vw9k"
Feb 13 19:10:26.381673 kubelet[2646]: I0213 19:10:26.381529 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae629d2e-ef4f-438d-ae71-9ff980257be1-config-volume\") pod \"coredns-7db6d8ff4d-bw9ct\" (UID: \"ae629d2e-ef4f-438d-ae71-9ff980257be1\") " pod="kube-system/coredns-7db6d8ff4d-bw9ct"
Feb 13 19:10:26.641615 kubelet[2646]: E0213 19:10:26.641460 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:26.642432 containerd[1475]: time="2025-02-13T19:10:26.642348239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4vw9k,Uid:58bb50b5-3e34-4567-9e6f-5f28ca610aeb,Namespace:kube-system,Attempt:0,}"
Feb 13 19:10:26.647110 kubelet[2646]: E0213 19:10:26.647079 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:26.649119 containerd[1475]: time="2025-02-13T19:10:26.647746546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bw9ct,Uid:ae629d2e-ef4f-438d-ae71-9ff980257be1,Namespace:kube-system,Attempt:0,}"
Feb 13 19:10:27.092654 kubelet[2646]: E0213 19:10:27.092508 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:27.174721 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:47638.service - OpenSSH per-connection server daemon (10.0.0.1:47638).
Feb 13 19:10:27.222018 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 47638 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:27.223345 sshd-session[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:27.227549 systemd-logind[1454]: New session 9 of user core.
Feb 13 19:10:27.236020 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:10:27.352973 sshd[3516]: Connection closed by 10.0.0.1 port 47638
Feb 13 19:10:27.353158 sshd-session[3514]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:27.355883 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:10:27.356034 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:47638.service: Deactivated successfully.
Feb 13 19:10:27.357745 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:10:27.359453 systemd-logind[1454]: Removed session 9.
Feb 13 19:10:28.104396 kubelet[2646]: E0213 19:10:28.104345 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:28.426979 systemd-networkd[1398]: cilium_host: Link UP
Feb 13 19:10:28.427519 systemd-networkd[1398]: cilium_net: Link UP
Feb 13 19:10:28.427525 systemd-networkd[1398]: cilium_net: Gained carrier
Feb 13 19:10:28.427695 systemd-networkd[1398]: cilium_host: Gained carrier
Feb 13 19:10:28.428293 systemd-networkd[1398]: cilium_net: Gained IPv6LL
Feb 13 19:10:28.428548 systemd-networkd[1398]: cilium_host: Gained IPv6LL
Feb 13 19:10:28.504944 systemd-networkd[1398]: cilium_vxlan: Link UP
Feb 13 19:10:28.504951 systemd-networkd[1398]: cilium_vxlan: Gained carrier
Feb 13 19:10:28.805966 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:10:29.097157 kubelet[2646]: E0213 19:10:29.097079 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:29.400321 systemd-networkd[1398]: lxc_health: Link UP
Feb 13 19:10:29.407642 systemd-networkd[1398]: lxc_health: Gained carrier
Feb 13 19:10:29.762403 systemd-networkd[1398]: lxc3aecc1e0aa4e: Link UP
Feb 13 19:10:29.770282 systemd-networkd[1398]: lxcfedf859bbab2: Link UP
Feb 13 19:10:29.778875 kernel: eth0: renamed from tmp5ba65
Feb 13 19:10:29.784868 kernel: eth0: renamed from tmp2719b
Feb 13 19:10:29.788946 systemd-networkd[1398]: lxcfedf859bbab2: Gained carrier
Feb 13 19:10:29.790857 systemd-networkd[1398]: lxc3aecc1e0aa4e: Gained carrier
Feb 13 19:10:30.290953 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL
Feb 13 19:10:30.483076 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Feb 13 19:10:30.957881 kubelet[2646]: E0213 19:10:30.957312 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:30.973576 kubelet[2646]: I0213 19:10:30.972986 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qd25c" podStartSLOduration=9.267795136 podStartE2EDuration="18.972967115s" podCreationTimestamp="2025-02-13 19:10:12 +0000 UTC" firstStartedPulling="2025-02-13 19:10:13.025321413 +0000 UTC m=+17.101063578" lastFinishedPulling="2025-02-13 19:10:22.730493392 +0000 UTC m=+26.806235557" observedRunningTime="2025-02-13 19:10:27.106124339 +0000 UTC m=+31.181866504" watchObservedRunningTime="2025-02-13 19:10:30.972967115 +0000 UTC m=+35.048709280"
Feb 13 19:10:31.058974 systemd-networkd[1398]: lxcfedf859bbab2: Gained IPv6LL
Feb 13 19:10:31.060943 systemd-networkd[1398]: lxc3aecc1e0aa4e: Gained IPv6LL
Feb 13 19:10:32.370637 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:47654.service - OpenSSH per-connection server daemon (10.0.0.1:47654).
Feb 13 19:10:32.426686 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 47654 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:32.428321 sshd-session[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:32.433694 systemd-logind[1454]: New session 10 of user core.
Feb 13 19:10:32.450066 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:10:32.571988 sshd[3908]: Connection closed by 10.0.0.1 port 47654
Feb 13 19:10:32.572564 sshd-session[3906]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:32.580714 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:47654.service: Deactivated successfully.
Feb 13 19:10:32.583505 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:10:32.585508 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:10:32.591127 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:54448.service - OpenSSH per-connection server daemon (10.0.0.1:54448).
Feb 13 19:10:32.593171 systemd-logind[1454]: Removed session 10.
Feb 13 19:10:32.630204 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:32.631966 sshd-session[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:32.636376 systemd-logind[1454]: New session 11 of user core.
Feb 13 19:10:32.646042 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:10:32.806379 sshd[3923]: Connection closed by 10.0.0.1 port 54448
Feb 13 19:10:32.805709 sshd-session[3921]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:32.816015 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:54448.service: Deactivated successfully.
Feb 13 19:10:32.818547 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:10:32.822198 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:10:32.833705 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:54456.service - OpenSSH per-connection server daemon (10.0.0.1:54456).
Feb 13 19:10:32.837400 systemd-logind[1454]: Removed session 11.
Feb 13 19:10:32.878824 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 54456 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:32.879508 sshd-session[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:32.883505 systemd-logind[1454]: New session 12 of user core.
Feb 13 19:10:32.893013 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:10:33.013770 sshd[3936]: Connection closed by 10.0.0.1 port 54456
Feb 13 19:10:33.015120 sshd-session[3934]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:33.018302 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:54456.service: Deactivated successfully.
Feb 13 19:10:33.021268 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:10:33.022188 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:10:33.023560 systemd-logind[1454]: Removed session 12.
Feb 13 19:10:33.482786 containerd[1475]: time="2025-02-13T19:10:33.482516870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:10:33.483181 containerd[1475]: time="2025-02-13T19:10:33.482874231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:10:33.483181 containerd[1475]: time="2025-02-13T19:10:33.482920912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:10:33.483181 containerd[1475]: time="2025-02-13T19:10:33.483051112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:10:33.490678 containerd[1475]: time="2025-02-13T19:10:33.490590902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:10:33.490678 containerd[1475]: time="2025-02-13T19:10:33.490651782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:10:33.490834 containerd[1475]: time="2025-02-13T19:10:33.490669302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:10:33.490964 containerd[1475]: time="2025-02-13T19:10:33.490893463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:10:33.514034 systemd[1]: Started cri-containerd-5ba65283982dd8f880d5bd95677093a867cb4daaa5a85c03905e541c95cc9f99.scope - libcontainer container 5ba65283982dd8f880d5bd95677093a867cb4daaa5a85c03905e541c95cc9f99.
Feb 13 19:10:33.517155 systemd[1]: Started cri-containerd-2719bf32a547dfe97bd8dfd96694ed73367b4bd3a66a298da8d9a24bb038e906.scope - libcontainer container 2719bf32a547dfe97bd8dfd96694ed73367b4bd3a66a298da8d9a24bb038e906.
Feb 13 19:10:33.528996 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:10:33.530055 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:10:33.551681 containerd[1475]: time="2025-02-13T19:10:33.551558702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bw9ct,Uid:ae629d2e-ef4f-438d-ae71-9ff980257be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ba65283982dd8f880d5bd95677093a867cb4daaa5a85c03905e541c95cc9f99\""
Feb 13 19:10:33.552634 kubelet[2646]: E0213 19:10:33.552402 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:33.555173 containerd[1475]: time="2025-02-13T19:10:33.554892955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4vw9k,Uid:58bb50b5-3e34-4567-9e6f-5f28ca610aeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2719bf32a547dfe97bd8dfd96694ed73367b4bd3a66a298da8d9a24bb038e906\""
Feb 13 19:10:33.555889 containerd[1475]: time="2025-02-13T19:10:33.555248317Z" level=info msg="CreateContainer within sandbox \"5ba65283982dd8f880d5bd95677093a867cb4daaa5a85c03905e541c95cc9f99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:10:33.556249 kubelet[2646]: E0213 19:10:33.556215 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:33.558540 containerd[1475]: time="2025-02-13T19:10:33.558505729Z" level=info msg="CreateContainer within sandbox \"2719bf32a547dfe97bd8dfd96694ed73367b4bd3a66a298da8d9a24bb038e906\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:10:33.574464 containerd[1475]: time="2025-02-13T19:10:33.574408592Z" level=info msg="CreateContainer within sandbox \"5ba65283982dd8f880d5bd95677093a867cb4daaa5a85c03905e541c95cc9f99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"630f871029784cff4ba9246175955f163597e7ba97985014d85ff4673a9aac75\""
Feb 13 19:10:33.574976 containerd[1475]: time="2025-02-13T19:10:33.574948914Z" level=info msg="StartContainer for \"630f871029784cff4ba9246175955f163597e7ba97985014d85ff4673a9aac75\""
Feb 13 19:10:33.577727 containerd[1475]: time="2025-02-13T19:10:33.577625885Z" level=info msg="CreateContainer within sandbox \"2719bf32a547dfe97bd8dfd96694ed73367b4bd3a66a298da8d9a24bb038e906\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"932a3c7d4dd2435b2ca531deee6af98d0522ced280db0fc314856ae9cf04682c\""
Feb 13 19:10:33.580248 containerd[1475]: time="2025-02-13T19:10:33.580001054Z" level=info msg="StartContainer for \"932a3c7d4dd2435b2ca531deee6af98d0522ced280db0fc314856ae9cf04682c\""
Feb 13 19:10:33.601013 systemd[1]: Started cri-containerd-630f871029784cff4ba9246175955f163597e7ba97985014d85ff4673a9aac75.scope - libcontainer container 630f871029784cff4ba9246175955f163597e7ba97985014d85ff4673a9aac75.
Feb 13 19:10:33.604528 systemd[1]: Started cri-containerd-932a3c7d4dd2435b2ca531deee6af98d0522ced280db0fc314856ae9cf04682c.scope - libcontainer container 932a3c7d4dd2435b2ca531deee6af98d0522ced280db0fc314856ae9cf04682c.
Feb 13 19:10:33.634795 containerd[1475]: time="2025-02-13T19:10:33.634655109Z" level=info msg="StartContainer for \"630f871029784cff4ba9246175955f163597e7ba97985014d85ff4673a9aac75\" returns successfully"
Feb 13 19:10:33.634795 containerd[1475]: time="2025-02-13T19:10:33.634659269Z" level=info msg="StartContainer for \"932a3c7d4dd2435b2ca531deee6af98d0522ced280db0fc314856ae9cf04682c\" returns successfully"
Feb 13 19:10:34.109255 kubelet[2646]: E0213 19:10:34.109206 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:34.112017 kubelet[2646]: E0213 19:10:34.111970 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:34.121019 kubelet[2646]: I0213 19:10:34.120865 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bw9ct" podStartSLOduration=22.120835451 podStartE2EDuration="22.120835451s" podCreationTimestamp="2025-02-13 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:34.120158408 +0000 UTC m=+38.195900573" watchObservedRunningTime="2025-02-13 19:10:34.120835451 +0000 UTC m=+38.196577576"
Feb 13 19:10:34.143478 kubelet[2646]: I0213 19:10:34.143408 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4vw9k" podStartSLOduration=22.143393097 podStartE2EDuration="22.143393097s" podCreationTimestamp="2025-02-13 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:34.130205846 +0000 UTC m=+38.205948011" watchObservedRunningTime="2025-02-13 19:10:34.143393097 +0000 UTC m=+38.219135262"
Feb 13 19:10:34.489993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426116291.mount: Deactivated successfully.
Feb 13 19:10:35.112948 kubelet[2646]: E0213 19:10:35.112865 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:35.112948 kubelet[2646]: E0213 19:10:35.112950 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:38.029384 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460).
Feb 13 19:10:38.077355 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:38.078771 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:38.083001 systemd-logind[1454]: New session 13 of user core.
Feb 13 19:10:38.096264 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:10:38.223588 sshd[4128]: Connection closed by 10.0.0.1 port 54460
Feb 13 19:10:38.224197 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:38.227494 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:54460.service: Deactivated successfully.
Feb 13 19:10:38.230354 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:10:38.237595 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:10:38.238477 systemd-logind[1454]: Removed session 13.
Feb 13 19:10:41.450455 kubelet[2646]: I0213 19:10:41.450369 2646 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:10:41.452927 kubelet[2646]: E0213 19:10:41.451267 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:42.133094 kubelet[2646]: E0213 19:10:42.131837 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:10:43.234681 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:39424.service - OpenSSH per-connection server daemon (10.0.0.1:39424).
Feb 13 19:10:43.274455 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 39424 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:43.275614 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:43.278857 systemd-logind[1454]: New session 14 of user core.
Feb 13 19:10:43.285994 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:10:43.400696 sshd[4146]: Connection closed by 10.0.0.1 port 39424
Feb 13 19:10:43.401222 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:43.413173 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:39424.service: Deactivated successfully.
Feb 13 19:10:43.414967 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:10:43.418899 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:10:43.421112 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:39426.service - OpenSSH per-connection server daemon (10.0.0.1:39426).
Feb 13 19:10:43.422772 systemd-logind[1454]: Removed session 14.
Feb 13 19:10:43.463277 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 39426 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:43.464482 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:43.468268 systemd-logind[1454]: New session 15 of user core.
Feb 13 19:10:43.478007 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:10:43.718737 sshd[4160]: Connection closed by 10.0.0.1 port 39426
Feb 13 19:10:43.719653 sshd-session[4158]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:43.732216 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:39426.service: Deactivated successfully.
Feb 13 19:10:43.734347 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:10:43.737311 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:10:43.745123 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:39440.service - OpenSSH per-connection server daemon (10.0.0.1:39440).
Feb 13 19:10:43.746083 systemd-logind[1454]: Removed session 15.
Feb 13 19:10:43.790881 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 39440 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:43.792360 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:43.801358 systemd-logind[1454]: New session 16 of user core.
Feb 13 19:10:43.806008 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:10:45.127891 sshd[4172]: Connection closed by 10.0.0.1 port 39440
Feb 13 19:10:45.128513 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:45.137970 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:39440.service: Deactivated successfully.
Feb 13 19:10:45.140868 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:10:45.148523 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:10:45.152281 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:39450.service - OpenSSH per-connection server daemon (10.0.0.1:39450).
Feb 13 19:10:45.154749 systemd-logind[1454]: Removed session 16.
Feb 13 19:10:45.195146 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 39450 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:45.196498 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:45.200746 systemd-logind[1454]: New session 17 of user core.
Feb 13 19:10:45.210179 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:10:45.448262 sshd[4191]: Connection closed by 10.0.0.1 port 39450
Feb 13 19:10:45.449218 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:45.459373 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:39450.service: Deactivated successfully.
Feb 13 19:10:45.463657 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:10:45.465257 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:10:45.475218 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:39462.service - OpenSSH per-connection server daemon (10.0.0.1:39462).
Feb 13 19:10:45.476527 systemd-logind[1454]: Removed session 17.
Feb 13 19:10:45.517877 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 39462 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:45.521313 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:45.527546 systemd-logind[1454]: New session 18 of user core.
Feb 13 19:10:45.543051 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:10:45.659055 sshd[4204]: Connection closed by 10.0.0.1 port 39462
Feb 13 19:10:45.659579 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:45.664243 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:39462.service: Deactivated successfully.
Feb 13 19:10:45.666940 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:10:45.668056 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:10:45.668817 systemd-logind[1454]: Removed session 18.
Feb 13 19:10:50.706201 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:39468.service - OpenSSH per-connection server daemon (10.0.0.1:39468).
Feb 13 19:10:50.720431 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 39468 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:50.721796 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:50.725701 systemd-logind[1454]: New session 19 of user core.
Feb 13 19:10:50.741192 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:10:50.860888 sshd[4222]: Connection closed by 10.0.0.1 port 39468
Feb 13 19:10:50.861463 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:50.864406 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:10:50.867087 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:10:50.867283 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:39468.service: Deactivated successfully.
Feb 13 19:10:50.869457 systemd-logind[1454]: Removed session 19.
Feb 13 19:10:55.878624 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:52078.service - OpenSSH per-connection server daemon (10.0.0.1:52078).
Feb 13 19:10:55.922266 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 52078 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:10:55.923444 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:10:55.927144 systemd-logind[1454]: New session 20 of user core.
Feb 13 19:10:55.937018 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:10:56.055982 sshd[4238]: Connection closed by 10.0.0.1 port 52078
Feb 13 19:10:56.056442 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Feb 13 19:10:56.058717 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:52078.service: Deactivated successfully.
Feb 13 19:10:56.060287 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:10:56.061769 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:10:56.062463 systemd-logind[1454]: Removed session 20.
Feb 13 19:11:01.066441 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:52080.service - OpenSSH per-connection server daemon (10.0.0.1:52080).
Feb 13 19:11:01.106867 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 52080 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:11:01.108117 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:11:01.111590 systemd-logind[1454]: New session 21 of user core.
Feb 13 19:11:01.126994 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:11:01.241216 sshd[4254]: Connection closed by 10.0.0.1 port 52080
Feb 13 19:11:01.241719 sshd-session[4252]: pam_unix(sshd:session): session closed for user core
Feb 13 19:11:01.248227 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:52080.service: Deactivated successfully.
Feb 13 19:11:01.250450 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:11:01.252090 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:11:01.253145 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:52092.service - OpenSSH per-connection server daemon (10.0.0.1:52092).
Feb 13 19:11:01.256662 systemd-logind[1454]: Removed session 21.
Feb 13 19:11:01.293387 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 52092 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:11:01.294558 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:11:01.297882 systemd-logind[1454]: New session 22 of user core.
Feb 13 19:11:01.308996 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:11:02.929986 containerd[1475]: time="2025-02-13T19:11:02.929942692Z" level=info msg="StopContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" with timeout 30 (s)"
Feb 13 19:11:02.931886 containerd[1475]: time="2025-02-13T19:11:02.930829777Z" level=info msg="Stop container \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" with signal terminated"
Feb 13 19:11:02.944959 systemd[1]: cri-containerd-9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6.scope: Deactivated successfully.
Feb 13 19:11:02.964611 containerd[1475]: time="2025-02-13T19:11:02.964573947Z" level=info msg="StopContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" with timeout 2 (s)"
Feb 13 19:11:02.965001 containerd[1475]: time="2025-02-13T19:11:02.964835108Z" level=info msg="Stop container \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" with signal terminated"
Feb 13 19:11:02.966143 containerd[1475]: time="2025-02-13T19:11:02.966099474Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:11:02.969159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6-rootfs.mount: Deactivated successfully.
Feb 13 19:11:02.971972 systemd-networkd[1398]: lxc_health: Link DOWN
Feb 13 19:11:02.971978 systemd-networkd[1398]: lxc_health: Lost carrier
Feb 13 19:11:02.981412 containerd[1475]: time="2025-02-13T19:11:02.981331791Z" level=info msg="shim disconnected" id=9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6 namespace=k8s.io
Feb 13 19:11:02.981412 containerd[1475]: time="2025-02-13T19:11:02.981386311Z" level=warning msg="cleaning up after shim disconnected" id=9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6 namespace=k8s.io
Feb 13 19:11:02.981412 containerd[1475]: time="2025-02-13T19:11:02.981394031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:11:02.998823 systemd[1]: cri-containerd-8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb.scope: Deactivated successfully.
Feb 13 19:11:02.999338 systemd[1]: cri-containerd-8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb.scope: Consumed 6.715s CPU time.
Feb 13 19:11:03.016578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb-rootfs.mount: Deactivated successfully.
Feb 13 19:11:03.023596 containerd[1475]: time="2025-02-13T19:11:03.023534401Z" level=info msg="shim disconnected" id=8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb namespace=k8s.io
Feb 13 19:11:03.023596 containerd[1475]: time="2025-02-13T19:11:03.023590283Z" level=warning msg="cleaning up after shim disconnected" id=8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb namespace=k8s.io
Feb 13 19:11:03.023596 containerd[1475]: time="2025-02-13T19:11:03.023598283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:11:03.033480 containerd[1475]: time="2025-02-13T19:11:03.033439099Z" level=info msg="StopContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" returns successfully"
Feb 13 19:11:03.035015 containerd[1475]: time="2025-02-13T19:11:03.034913258Z" level=info msg="StopPodSandbox for \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\""
Feb 13 19:11:03.035015 containerd[1475]: time="2025-02-13T19:11:03.034953219Z" level=info msg="Container to stop \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.037457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184-shm.mount: Deactivated successfully.
Feb 13 19:11:03.040130 containerd[1475]: time="2025-02-13T19:11:03.040008151Z" level=info msg="StopContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" returns successfully"
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040483483Z" level=info msg="StopPodSandbox for \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\""
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040518084Z" level=info msg="Container to stop \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040528484Z" level=info msg="Container to stop \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040537445Z" level=info msg="Container to stop \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040545925Z" level=info msg="Container to stop \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.040645 containerd[1475]: time="2025-02-13T19:11:03.040555485Z" level=info msg="Container to stop \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:11:03.042723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b-shm.mount: Deactivated successfully.
Feb 13 19:11:03.045769 systemd[1]: cri-containerd-dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184.scope: Deactivated successfully.
Feb 13 19:11:03.047127 systemd[1]: cri-containerd-7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b.scope: Deactivated successfully.
Feb 13 19:11:03.069690 containerd[1475]: time="2025-02-13T19:11:03.069633164Z" level=info msg="shim disconnected" id=7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b namespace=k8s.io
Feb 13 19:11:03.070199 containerd[1475]: time="2025-02-13T19:11:03.069956612Z" level=warning msg="cleaning up after shim disconnected" id=7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b namespace=k8s.io
Feb 13 19:11:03.070199 containerd[1475]: time="2025-02-13T19:11:03.069973533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:11:03.070199 containerd[1475]: time="2025-02-13T19:11:03.069786768Z" level=info msg="shim disconnected" id=dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184 namespace=k8s.io
Feb 13 19:11:03.070199 containerd[1475]: time="2025-02-13T19:11:03.070062855Z" level=warning msg="cleaning up after shim disconnected" id=dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184 namespace=k8s.io
Feb 13 19:11:03.070199 containerd[1475]: time="2025-02-13T19:11:03.070069735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:11:03.082940 containerd[1475]: time="2025-02-13T19:11:03.082660144Z" level=info msg="TearDown network for sandbox \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\" successfully"
Feb 13 19:11:03.082940 containerd[1475]: time="2025-02-13T19:11:03.082696024Z" level=info msg="StopPodSandbox for \"dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184\" returns successfully"
Feb 13 19:11:03.092901 containerd[1475]: time="2025-02-13T19:11:03.092829969Z" level=info msg="TearDown network for sandbox \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" successfully"
Feb 13 19:11:03.092901 containerd[1475]: time="2025-02-13T19:11:03.092897091Z" level=info msg="StopPodSandbox for \"7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b\" returns successfully"
Feb 13 19:11:03.177545 kubelet[2646]: I0213 19:11:03.177396 2646 scope.go:117] "RemoveContainer" containerID="8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb"
Feb 13 19:11:03.179370 containerd[1475]: time="2025-02-13T19:11:03.179330025Z" level=info msg="RemoveContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\""
Feb 13 19:11:03.183195 containerd[1475]: time="2025-02-13T19:11:03.183102364Z" level=info msg="RemoveContainer for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" returns successfully"
Feb 13 19:11:03.183561 kubelet[2646]: I0213 19:11:03.183501 2646 scope.go:117] "RemoveContainer" containerID="9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a"
Feb 13 19:11:03.184838 containerd[1475]: time="2025-02-13T19:11:03.184805888Z" level=info msg="RemoveContainer for \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\""
Feb 13 19:11:03.187150 containerd[1475]: time="2025-02-13T19:11:03.187124029Z" level=info msg="RemoveContainer for \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\" returns successfully"
Feb 13 19:11:03.187350 kubelet[2646]: I0213 19:11:03.187299 2646 scope.go:117] "RemoveContainer" containerID="b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8"
Feb 13 19:11:03.188622 containerd[1475]: time="2025-02-13T19:11:03.188591627Z" level=info msg="RemoveContainer for \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\""
Feb 13 19:11:03.190830 containerd[1475]: time="2025-02-13T19:11:03.190805485Z" level=info msg="RemoveContainer for \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\" returns successfully"
Feb 13 19:11:03.191076 kubelet[2646]: I0213 19:11:03.191061 2646 scope.go:117] "RemoveContainer" containerID="690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127"
Feb 13 19:11:03.192189 containerd[1475]: time="2025-02-13T19:11:03.192164240Z" level=info msg="RemoveContainer for \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\""
Feb 13 19:11:03.194402 containerd[1475]: time="2025-02-13T19:11:03.194374578Z" level=info msg="RemoveContainer for \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\" returns successfully"
Feb 13 19:11:03.194612 kubelet[2646]: I0213 19:11:03.194553 2646 scope.go:117] "RemoveContainer" containerID="e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff"
Feb 13 19:11:03.195470 kubelet[2646]: I0213 19:11:03.195323 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d109591a-dafb-4354-841a-46d8369060bf-cilium-config-path\") pod \"d109591a-dafb-4354-841a-46d8369060bf\" (UID: \"d109591a-dafb-4354-841a-46d8369060bf\") "
Feb 13 19:11:03.195470 kubelet[2646]: I0213 19:11:03.195382 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wsh\" (UniqueName: \"kubernetes.io/projected/d109591a-dafb-4354-841a-46d8369060bf-kube-api-access-k6wsh\") pod \"d109591a-dafb-4354-841a-46d8369060bf\" (UID: \"d109591a-dafb-4354-841a-46d8369060bf\") "
Feb 13 19:11:03.195972 containerd[1475]: time="2025-02-13T19:11:03.195687252Z" level=info msg="RemoveContainer for \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\""
Feb 13 19:11:03.198726 containerd[1475]: time="2025-02-13T19:11:03.198680250Z" level=info msg="RemoveContainer for \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\" returns successfully"
Feb 13 19:11:03.200014 kubelet[2646]: I0213 19:11:03.199918 2646 scope.go:117] "RemoveContainer" containerID="8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb"
Feb 13 19:11:03.200312 containerd[1475]: time="2025-02-13T19:11:03.200211570Z" level=error msg="ContainerStatus for \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\": not found"
Feb 13 19:11:03.206315 kubelet[2646]: E0213 19:11:03.206263 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\": not found" containerID="8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb"
Feb 13 19:11:03.206424 kubelet[2646]: I0213 19:11:03.206321 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb"} err="failed to get container status \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f1face2a37edab34eb5a41c167343a2ae9d570a170b8b5fd599fa62078981fb\": not found"
Feb 13 19:11:03.206455 kubelet[2646]: I0213 19:11:03.206427 2646 scope.go:117] "RemoveContainer" containerID="9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a"
Feb 13 19:11:03.206747 containerd[1475]: time="2025-02-13T19:11:03.206667819Z" level=error msg="ContainerStatus for \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\": not found"
Feb 13 19:11:03.206830 kubelet[2646]: E0213 19:11:03.206809 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\": not found" containerID="9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a"
Feb 13 19:11:03.206904 kubelet[2646]: I0213 19:11:03.206833 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a"} err="failed to get container status \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c02aaad0235eff3a1d5d387dca6da4f0f827b98340851029722b9db8d5be99a\": not found"
Feb 13 19:11:03.206904 kubelet[2646]: I0213 19:11:03.206866 2646 scope.go:117] "RemoveContainer" containerID="b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8"
Feb 13 19:11:03.207072 containerd[1475]: time="2025-02-13T19:11:03.207041628Z" level=error msg="ContainerStatus for \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\": not found"
Feb 13 19:11:03.207189 kubelet[2646]: E0213 19:11:03.207164 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\": not found" containerID="b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8"
Feb 13 19:11:03.207217 kubelet[2646]: I0213 19:11:03.207194 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8"} err="failed to get container status \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8cb6c7560862c3922f773b8dbf24da2621236acfe379ad0889f431473b43fd8\": not found"
Feb 13 19:11:03.207217 kubelet[2646]: I0213 19:11:03.207211 2646 scope.go:117] "RemoveContainer" containerID="690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127"
Feb 13 19:11:03.207433 containerd[1475]: time="2025-02-13T19:11:03.207401718Z" level=error msg="ContainerStatus for \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\": not found"
Feb 13 19:11:03.207514 kubelet[2646]: E0213 19:11:03.207497 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\": not found" containerID="690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127"
Feb 13 19:11:03.207544 kubelet[2646]: I0213 19:11:03.207517 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127"} err="failed to get container status \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\": rpc error: code = NotFound desc = an error occurred when try to find container \"690335066a862dee7f5e05d05918b296a4d0df3385c446d4073aed16399fa127\": not found"
Feb 13 19:11:03.207544 kubelet[2646]: I0213 19:11:03.207531 2646 scope.go:117] "RemoveContainer" containerID="e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff"
Feb 13 19:11:03.207758 containerd[1475]: time="2025-02-13T19:11:03.207691365Z" level=error msg="ContainerStatus for \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\": not found"
Feb 13 19:11:03.207830 kubelet[2646]: E0213 19:11:03.207805 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\": not found" containerID="e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff"
Feb 13 19:11:03.207901 kubelet[2646]: I0213 19:11:03.207833 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff"} err="failed to get container status \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1530f53f05228c28ec94a68a9f93c75ab81f00408e73dc96287a2f33ede3dff\": not found"
Feb 13 19:11:03.207901 kubelet[2646]: I0213 19:11:03.207861 2646 scope.go:117] "RemoveContainer" containerID="9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6"
Feb 13 19:11:03.208796 kubelet[2646]: I0213 19:11:03.208765 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d109591a-dafb-4354-841a-46d8369060bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d109591a-dafb-4354-841a-46d8369060bf" (UID: "d109591a-dafb-4354-841a-46d8369060bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:11:03.209025 containerd[1475]: time="2025-02-13T19:11:03.208998039Z" level=info msg="RemoveContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\""
Feb 13 19:11:03.209515 kubelet[2646]: I0213 19:11:03.209470 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d109591a-dafb-4354-841a-46d8369060bf-kube-api-access-k6wsh" (OuterVolumeSpecName: "kube-api-access-k6wsh") pod "d109591a-dafb-4354-841a-46d8369060bf" (UID: "d109591a-dafb-4354-841a-46d8369060bf"). InnerVolumeSpecName "kube-api-access-k6wsh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:11:03.211491 containerd[1475]: time="2025-02-13T19:11:03.211456264Z" level=info msg="RemoveContainer for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" returns successfully" Feb 13 19:11:03.211651 kubelet[2646]: I0213 19:11:03.211629 2646 scope.go:117] "RemoveContainer" containerID="9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6" Feb 13 19:11:03.211976 containerd[1475]: time="2025-02-13T19:11:03.211935236Z" level=error msg="ContainerStatus for \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\": not found" Feb 13 19:11:03.212075 kubelet[2646]: E0213 19:11:03.212051 2646 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\": not found" containerID="9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6" Feb 13 19:11:03.212120 kubelet[2646]: I0213 19:11:03.212078 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6"} err="failed to get container status \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c8e35515e64fedeb2135e0d76ec5488e974f5e895d3663c04aad24a0705b6d6\": not found" Feb 13 19:11:03.296419 kubelet[2646]: I0213 19:11:03.296381 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hubble-tls\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296419 kubelet[2646]: I0213 19:11:03.296422 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-run\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296440 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hostproc\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296456 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-etc-cni-netd\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296470 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-cgroup\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296491 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-clustermesh-secrets\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296507 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-net\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296578 kubelet[2646]: I0213 19:11:03.296520 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cni-path\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296536 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-kernel\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296557 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-config-path\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296572 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-xtables-lock\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296585 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-bpf-maps\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296601 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdg8d\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-kube-api-access-rdg8d\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296707 kubelet[2646]: I0213 19:11:03.296616 2646 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-lib-modules\") pod \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\" (UID: \"7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a\") " Feb 13 19:11:03.296824 kubelet[2646]: I0213 19:11:03.296649 2646 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k6wsh\" (UniqueName: \"kubernetes.io/projected/d109591a-dafb-4354-841a-46d8369060bf-kube-api-access-k6wsh\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.296824 kubelet[2646]: I0213 19:11:03.296659 2646 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d109591a-dafb-4354-841a-46d8369060bf-cilium-config-path\") 
on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.296824 kubelet[2646]: I0213 19:11:03.296716 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298580 kubelet[2646]: I0213 19:11:03.297164 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298580 kubelet[2646]: I0213 19:11:03.297169 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298580 kubelet[2646]: I0213 19:11:03.297192 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298580 kubelet[2646]: I0213 19:11:03.297215 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298580 kubelet[2646]: I0213 19:11:03.297269 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298753 kubelet[2646]: I0213 19:11:03.297297 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298753 kubelet[2646]: I0213 19:11:03.297314 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.298753 kubelet[2646]: I0213 19:11:03.297332 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.299068 kubelet[2646]: I0213 19:11:03.299039 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:11:03.299121 kubelet[2646]: I0213 19:11:03.299093 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:11:03.299313 kubelet[2646]: I0213 19:11:03.299283 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:11:03.299672 kubelet[2646]: I0213 19:11:03.299643 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-kube-api-access-rdg8d" (OuterVolumeSpecName: "kube-api-access-rdg8d") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "kube-api-access-rdg8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:11:03.299898 kubelet[2646]: I0213 19:11:03.299867 2646 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" (UID: "7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:11:03.396892 kubelet[2646]: I0213 19:11:03.396830 2646 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.396892 kubelet[2646]: I0213 19:11:03.396886 2646 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.396892 kubelet[2646]: I0213 19:11:03.396899 2646 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.396892 kubelet[2646]: I0213 19:11:03.396910 2646 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396918 2646 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396928 2646 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396937 2646 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396945 2646 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396952 2646 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396960 2646 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396968 2646 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rdg8d\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-kube-api-access-rdg8d\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397132 kubelet[2646]: I0213 19:11:03.396975 2646 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.397328 kubelet[2646]: I0213 19:11:03.396983 2646 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-hubble-tls\") on node \"localhost\" 
DevicePath \"\"" Feb 13 19:11:03.397328 kubelet[2646]: I0213 19:11:03.396990 2646 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:03.478404 systemd[1]: Removed slice kubepods-burstable-pod7ef57c1c_47f5_4f73_ba0b_b1ce9a2e9b4a.slice - libcontainer container kubepods-burstable-pod7ef57c1c_47f5_4f73_ba0b_b1ce9a2e9b4a.slice. Feb 13 19:11:03.478493 systemd[1]: kubepods-burstable-pod7ef57c1c_47f5_4f73_ba0b_b1ce9a2e9b4a.slice: Consumed 6.928s CPU time. Feb 13 19:11:03.484222 systemd[1]: Removed slice kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice - libcontainer container kubepods-besteffort-podd109591a_dafb_4354_841a_46d8369060bf.slice. Feb 13 19:11:03.944106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7781ede5bd8c1bb1e7b123dc4fad8e83ce341e8e09e4d114ad03f29feb030a0b-rootfs.mount: Deactivated successfully. Feb 13 19:11:03.944210 systemd[1]: var-lib-kubelet-pods-7ef57c1c\x2d47f5\x2d4f73\x2dba0b\x2db1ce9a2e9b4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdg8d.mount: Deactivated successfully. Feb 13 19:11:03.944276 systemd[1]: var-lib-kubelet-pods-7ef57c1c\x2d47f5\x2d4f73\x2dba0b\x2db1ce9a2e9b4a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:11:03.944326 systemd[1]: var-lib-kubelet-pods-7ef57c1c\x2d47f5\x2d4f73\x2dba0b\x2db1ce9a2e9b4a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:11:03.944379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfce87160d0dacf2b85f7ca746695d0d0c6961d32e797b5431d5ae9d38e81184-rootfs.mount: Deactivated successfully. Feb 13 19:11:03.944424 systemd[1]: var-lib-kubelet-pods-d109591a\x2ddafb\x2d4354\x2d841a\x2d46d8369060bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6wsh.mount: Deactivated successfully. Feb 13 19:11:04.013371 kubelet[2646]: I0213 19:11:04.012524 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" path="/var/lib/kubelet/pods/7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a/volumes" Feb 13 19:11:04.013371 kubelet[2646]: I0213 19:11:04.013108 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d109591a-dafb-4354-841a-46d8369060bf" path="/var/lib/kubelet/pods/d109591a-dafb-4354-841a-46d8369060bf/volumes" Feb 13 19:11:04.898067 sshd[4269]: Connection closed by 10.0.0.1 port 52092 Feb 13 19:11:04.895381 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:04.904534 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:52092.service: Deactivated successfully. Feb 13 19:11:04.906474 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:11:04.909088 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:11:04.924315 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:55248.service - OpenSSH per-connection server daemon (10.0.0.1:55248). Feb 13 19:11:04.928706 systemd-logind[1454]: Removed session 22. Feb 13 19:11:04.967506 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 55248 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:11:04.968071 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:04.973687 systemd-logind[1454]: New session 23 of user core. 
Feb 13 19:11:04.983050 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:11:06.051511 kubelet[2646]: E0213 19:11:06.051446 2646 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:11:06.423875 sshd[4432]: Connection closed by 10.0.0.1 port 55248 Feb 13 19:11:06.425071 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:06.432542 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:55248.service: Deactivated successfully. Feb 13 19:11:06.435957 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:11:06.437976 kubelet[2646]: I0213 19:11:06.437029 2646 topology_manager.go:215] "Topology Admit Handler" podUID="ab8af914-8c42-4c0e-860a-baaaa38aa88b" podNamespace="kube-system" podName="cilium-9jdbw" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437100 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="clean-cilium-state" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437109 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d109591a-dafb-4354-841a-46d8369060bf" containerName="cilium-operator" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437116 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="apply-sysctl-overwrites" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437122 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="mount-bpf-fs" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437128 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="cilium-agent" Feb 13 19:11:06.437976 kubelet[2646]: E0213 19:11:06.437135 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="mount-cgroup" Feb 13 19:11:06.437976 kubelet[2646]: I0213 19:11:06.437176 2646 memory_manager.go:354] "RemoveStaleState removing state" podUID="d109591a-dafb-4354-841a-46d8369060bf" containerName="cilium-operator" Feb 13 19:11:06.437976 kubelet[2646]: I0213 19:11:06.437184 2646 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef57c1c-47f5-4f73-ba0b-b1ce9a2e9b4a" containerName="cilium-agent" Feb 13 19:11:06.436117 systemd[1]: session-23.scope: Consumed 1.331s CPU time. Feb 13 19:11:06.440931 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:11:06.448289 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:55262.service - OpenSSH per-connection server daemon (10.0.0.1:55262). Feb 13 19:11:06.451920 systemd-logind[1454]: Removed session 23. Feb 13 19:11:06.471540 systemd[1]: Created slice kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice - libcontainer container kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice. Feb 13 19:11:06.507269 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 55262 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:11:06.508721 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:06.515987 systemd-logind[1454]: New session 24 of user core. Feb 13 19:11:06.523026 systemd[1]: Started session-24.scope - Session 24 of User core. 
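The `Created slice kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice` entry at 19:11:06.471 shows the naming scheme kubelet's systemd cgroup driver uses for per-pod slices: the QoS-class slice prefix plus the pod UID with '-' swapped for '_', since '-' is systemd's slice-hierarchy separator. A small sketch (the helper name is invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName is a hypothetical helper mirroring how the kubelet's systemd
// cgroup driver derives a pod slice name from QoS class and pod UID.
func podSliceName(qos, uid string) string {
	// '-' separates hierarchy levels in slice names, so UIDs use '_' instead.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "ab8af914-8c42-4c0e-860a-baaaa38aa88b"))
	// kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice
}
```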
Feb 13 19:11:06.572456 sshd[4445]: Connection closed by 10.0.0.1 port 55262 Feb 13 19:11:06.573033 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:06.584415 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:55262.service: Deactivated successfully. Feb 13 19:11:06.586148 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:11:06.587944 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:11:06.589507 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:55270.service - OpenSSH per-connection server daemon (10.0.0.1:55270). Feb 13 19:11:06.590924 systemd-logind[1454]: Removed session 24. Feb 13 19:11:06.612148 kubelet[2646]: I0213 19:11:06.612112 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-lib-modules\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.612233 kubelet[2646]: I0213 19:11:06.612152 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-xtables-lock\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.612233 kubelet[2646]: I0213 19:11:06.612173 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-host-proc-sys-kernel\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.612233 kubelet[2646]: I0213 19:11:06.612189 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab8af914-8c42-4c0e-860a-baaaa38aa88b-hubble-tls\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.612233 kubelet[2646]: I0213 19:11:06.612209 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-cilium-cgroup\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613099 kubelet[2646]: I0213 19:11:06.612782 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs76w\" (UniqueName: \"kubernetes.io/projected/ab8af914-8c42-4c0e-860a-baaaa38aa88b-kube-api-access-cs76w\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613099 kubelet[2646]: I0213 19:11:06.612838 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-hostproc\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613099 kubelet[2646]: I0213 19:11:06.612870 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-etc-cni-netd\") 
pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613099 kubelet[2646]: I0213 19:11:06.612889 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab8af914-8c42-4c0e-860a-baaaa38aa88b-clustermesh-secrets\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613099 kubelet[2646]: I0213 19:11:06.612914 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab8af914-8c42-4c0e-860a-baaaa38aa88b-cilium-ipsec-secrets\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613250 kubelet[2646]: I0213 19:11:06.612934 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-host-proc-sys-net\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613250 kubelet[2646]: I0213 19:11:06.612953 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab8af914-8c42-4c0e-860a-baaaa38aa88b-cilium-config-path\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613250 kubelet[2646]: I0213 19:11:06.612971 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-cilium-run\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613250 kubelet[2646]: I0213 19:11:06.612986 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-bpf-maps\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.613250 kubelet[2646]: I0213 19:11:06.613004 2646 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab8af914-8c42-4c0e-860a-baaaa38aa88b-cni-path\") pod \"cilium-9jdbw\" (UID: \"ab8af914-8c42-4c0e-860a-baaaa38aa88b\") " pod="kube-system/cilium-9jdbw" Feb 13 19:11:06.631662 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 55270 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:11:06.632837 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:06.636694 systemd-logind[1454]: New session 25 of user core. Feb 13 19:11:06.643023 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 19:11:06.776095 kubelet[2646]: E0213 19:11:06.775979 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:06.777581 containerd[1475]: time="2025-02-13T19:11:06.777371325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jdbw,Uid:ab8af914-8c42-4c0e-860a-baaaa38aa88b,Namespace:kube-system,Attempt:0,}" Feb 13 19:11:06.801102 containerd[1475]: time="2025-02-13T19:11:06.800992176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:11:06.803249 containerd[1475]: time="2025-02-13T19:11:06.801057417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:11:06.803340 containerd[1475]: time="2025-02-13T19:11:06.803245870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:11:06.803751 containerd[1475]: time="2025-02-13T19:11:06.803703841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:11:06.822023 systemd[1]: Started cri-containerd-de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52.scope - libcontainer container de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52. Feb 13 19:11:06.856408 containerd[1475]: time="2025-02-13T19:11:06.856331832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jdbw,Uid:ab8af914-8c42-4c0e-860a-baaaa38aa88b,Namespace:kube-system,Attempt:0,} returns sandbox id \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\"" Feb 13 19:11:06.857206 kubelet[2646]: E0213 19:11:06.857182 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:06.860084 containerd[1475]: time="2025-02-13T19:11:06.860048122Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:11:06.871874 containerd[1475]: time="2025-02-13T19:11:06.871803446Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f\"" Feb 13 19:11:06.872485 containerd[1475]: time="2025-02-13T19:11:06.872461622Z" level=info msg="StartContainer for \"8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f\"" Feb 13 19:11:06.894026 systemd[1]: Started cri-containerd-8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f.scope - libcontainer container 8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f. Feb 13 19:11:06.917069 containerd[1475]: time="2025-02-13T19:11:06.917009177Z" level=info msg="StartContainer for \"8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f\" returns successfully" Feb 13 19:11:06.938372 systemd[1]: cri-containerd-8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f.scope: Deactivated successfully. 
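The 19:11:06.777 through 19:11:06.917 entries are the CRI gRPC sequence kubelet drives against containerd: RunPodSandbox for cilium-9jdbw, then CreateContainer and StartContainer for the first init container, mount-cgroup. A stripped-down sketch of the same sequence using the published CRI API; the socket path is the containerd default, and the image and configs are placeholders, not the pod's real spec:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-9jdbw",
			Namespace: "kube-system",
			Uid:       "ab8af914-8c42-4c0e-860a-baaaa38aa88b",
		},
	}
	// RunPodSandbox: returns the sandbox id seen in the log (de9defe9...).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer within the sandbox; image is a placeholder.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:placeholder"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// StartContainer: corresponds to the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}
```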
Feb 13 19:11:06.970995 containerd[1475]: time="2025-02-13T19:11:06.970928399Z" level=info msg="shim disconnected" id=8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f namespace=k8s.io Feb 13 19:11:06.970995 containerd[1475]: time="2025-02-13T19:11:06.970986441Z" level=warning msg="cleaning up after shim disconnected" id=8237a539fabf79e22870b7deb1d7549d7c948f1c0c63d2817f7f4f7836a0933f namespace=k8s.io Feb 13 19:11:06.970995 containerd[1475]: time="2025-02-13T19:11:06.970997401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:07.188155 kubelet[2646]: E0213 19:11:07.187549 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:07.191864 containerd[1475]: time="2025-02-13T19:11:07.190897796Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:11:07.205540 containerd[1475]: time="2025-02-13T19:11:07.205491900Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040\"" Feb 13 19:11:07.206291 containerd[1475]: time="2025-02-13T19:11:07.206252758Z" level=info msg="StartContainer for \"1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040\"" Feb 13 19:11:07.236063 systemd[1]: Started cri-containerd-1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040.scope - libcontainer container 1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040. Feb 13 19:11:07.260534 containerd[1475]: time="2025-02-13T19:11:07.260470034Z" level=info msg="StartContainer for \"1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040\" returns successfully" Feb 13 19:11:07.270244 systemd[1]: cri-containerd-1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040.scope: Deactivated successfully. 
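The recurring `dns.go:153 "Nameserver limits exceeded"` errors are kubelet validating the node's resolv.conf: classic glibc resolvers only honor the first three nameserver entries, so kubelet warns and applies a truncated list (here `1.1.1.1 1.0.0.1 8.8.8.8`). A sketch of that truncation; the helper name and the fourth nameserver are invented:

```go
package main

import "fmt"

// maxResolvConfNameservers mirrors the classic glibc MAXNS limit of 3
// that kubelet checks resolv.conf against.
const maxResolvConfNameservers = 3

// applyNameserverLimit keeps the first three entries and reports
// whether anything was dropped (which triggers the warning above).
func applyNameserverLimit(ns []string) ([]string, bool) {
	if len(ns) <= maxResolvConfNameservers {
		return ns, false
	}
	return ns[:maxResolvConfNameservers], true
}

func main() {
	applied, truncated := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, "truncated:", truncated)
	// [1.1.1.1 1.0.0.1 8.8.8.8] truncated: true
}
```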
Feb 13 19:11:07.297533 containerd[1475]: time="2025-02-13T19:11:07.297455385Z" level=info msg="shim disconnected" id=1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040 namespace=k8s.io Feb 13 19:11:07.297533 containerd[1475]: time="2025-02-13T19:11:07.297510866Z" level=warning msg="cleaning up after shim disconnected" id=1556740af9e54029840d4d2a7941f41d1839f32c2d207a81367e7fadddfbc040 namespace=k8s.io Feb 13 19:11:07.297533 containerd[1475]: time="2025-02-13T19:11:07.297520146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:08.011072 kubelet[2646]: I0213 19:11:08.010799 2646 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:11:08Z","lastTransitionTime":"2025-02-13T19:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:11:08.191473 kubelet[2646]: E0213 19:11:08.191363 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:08.193824 containerd[1475]: time="2025-02-13T19:11:08.193772294Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:11:08.210004 containerd[1475]: time="2025-02-13T19:11:08.209958065Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303\"" Feb 13 19:11:08.210660 containerd[1475]: time="2025-02-13T19:11:08.210616960Z" level=info msg="StartContainer for \"0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303\"" Feb 13 19:11:08.238158 systemd[1]: Started cri-containerd-0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303.scope - libcontainer container 0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303. Feb 13 19:11:08.262742 containerd[1475]: time="2025-02-13T19:11:08.262641034Z" level=info msg="StartContainer for \"0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303\" returns successfully" Feb 13 19:11:08.262902 systemd[1]: cri-containerd-0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303.scope: Deactivated successfully. Feb 13 19:11:08.300525 containerd[1475]: time="2025-02-13T19:11:08.300465303Z" level=info msg="shim disconnected" id=0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303 namespace=k8s.io Feb 13 19:11:08.300525 containerd[1475]: time="2025-02-13T19:11:08.300519864Z" level=warning msg="cleaning up after shim disconnected" id=0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303 namespace=k8s.io Feb 13 19:11:08.300819 containerd[1475]: time="2025-02-13T19:11:08.300612386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:08.719021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0344b91d95117a2aefd9027dc8d94b570f66834f50b2a340e0693e95b5606303-rootfs.mount: Deactivated successfully. 
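The short-lived `mount-bpf-fs` container above (created, started, and exited between 19:11:08.19 and 19:11:08.30) exists to ensure the BPF filesystem is mounted at /sys/fs/bpf before the agent runs, functionally `mount -t bpf bpf /sys/fs/bpf`. A minimal sketch of that mount, assuming golang.org/x/sys/unix and skipping the already-mounted probing the real init container performs:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent to: mount -t bpf bpf /sys/fs/bpf
	// EBUSY here usually just means the filesystem is already mounted.
	if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs available at /sys/fs/bpf")
}
```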
Feb 13 19:11:09.194580 kubelet[2646]: E0213 19:11:09.194551 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:09.201047 containerd[1475]: time="2025-02-13T19:11:09.200793657Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:11:09.211683 containerd[1475]: time="2025-02-13T19:11:09.211638980Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686\"" Feb 13 19:11:09.212154 containerd[1475]: time="2025-02-13T19:11:09.212117471Z" level=info msg="StartContainer for \"c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686\"" Feb 13 19:11:09.246009 systemd[1]: Started cri-containerd-c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686.scope - libcontainer container c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686. Feb 13 19:11:09.263752 systemd[1]: cri-containerd-c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686.scope: Deactivated successfully. Feb 13 19:11:09.268592 containerd[1475]: time="2025-02-13T19:11:09.268423651Z" level=info msg="StartContainer for \"c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686\" returns successfully" Feb 13 19:11:09.285008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686-rootfs.mount: Deactivated successfully. 
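`clean-cilium-state`, the init container created and torn down above, clears leftover agent state so the replacement pod starts fresh. A loose sketch of one part of that job, removing pinned BPF map files; the paths and glob are illustrative only, not Cilium's actual cleanup script:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative only: Cilium pins BPF maps under this directory, and a
	// state cleanup removes them so the new agent starts without stale maps.
	matches, err := filepath.Glob("/sys/fs/bpf/tc/globals/cilium_*")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range matches {
		if err := os.Remove(m); err != nil {
			log.Printf("removing %s: %v", m, err)
		}
	}
}
```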
Feb 13 19:11:09.291123 containerd[1475]: time="2025-02-13T19:11:09.291060318Z" level=info msg="shim disconnected" id=c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686 namespace=k8s.io Feb 13 19:11:09.291123 containerd[1475]: time="2025-02-13T19:11:09.291121359Z" level=warning msg="cleaning up after shim disconnected" id=c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686 namespace=k8s.io Feb 13 19:11:09.291302 containerd[1475]: time="2025-02-13T19:11:09.291130559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:09.292227 containerd[1475]: time="2025-02-13T19:11:09.279058969Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice/cri-containerd-c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686.scope/memory.events\": no such file or directory" Feb 13 19:11:10.198499 kubelet[2646]: E0213 19:11:10.198438 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:10.202519 containerd[1475]: time="2025-02-13T19:11:10.202334048Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:11:10.215082 containerd[1475]: time="2025-02-13T19:11:10.215012365Z" level=info msg="CreateContainer within sandbox \"de9defe9bf6ce88bdd788a0a2bdfa9f96065844e7745b922733596d874cd9f52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8\"" Feb 13 19:11:10.216201 containerd[1475]: time="2025-02-13T19:11:10.216135909Z" level=info msg="StartContainer for \"6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8\"" Feb 13 19:11:10.237705 systemd[1]: run-containerd-runc-k8s.io-6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8-runc.nDJRE6.mount: Deactivated successfully. Feb 13 19:11:10.247043 systemd[1]: Started cri-containerd-6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8.scope - libcontainer container 6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8. 
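The `*cgroupsv2.Manager.EventChan` warning timestamped 19:11:09.279 is a benign race: the clean-cilium-state scope exited so quickly that its cgroup directory was already removed when the shim tried to watch `memory.events`. A sketch of the failing step, assuming golang.org/x/sys/unix; the cgroup path is the one from the log:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// Once the short-lived scope exits, this directory is removed and the
	// watch fails with ENOENT ("no such file or directory"), which is what
	// the containerd shim surfaced as a warning above.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podab8af914_8c42_4c0e_860a_baaaa38aa88b.slice/" +
		"cri-containerd-c0e1eaa903ae57b6c21b714fc1b3ede7276e77654ddb1d7be7321690d1721686.scope/memory.events"
	if _, err := unix.InotifyAddWatch(fd, path, unix.IN_MODIFY); err != nil {
		log.Printf("adding watch: %v", err)
	}
}
```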
Feb 13 19:11:10.271513 containerd[1475]: time="2025-02-13T19:11:10.271448037Z" level=info msg="StartContainer for \"6d83fadddaefd4f9c7b3bcc730854862fd926315be6734c4def4b8c4afa8cef8\" returns successfully" Feb 13 19:11:10.585876 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:11:11.205782 kubelet[2646]: E0213 19:11:11.203298 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:11.223235 kubelet[2646]: I0213 19:11:11.223007 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9jdbw" podStartSLOduration=5.222991136 podStartE2EDuration="5.222991136s" podCreationTimestamp="2025-02-13 19:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:11:11.222795332 +0000 UTC m=+75.298537497" watchObservedRunningTime="2025-02-13 19:11:11.222991136 +0000 UTC m=+75.298733261" Feb 13 19:11:12.777265 kubelet[2646]: E0213 19:11:12.777219 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:13.011078 kubelet[2646]: E0213 19:11:13.011047 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:13.437272 systemd-networkd[1398]: lxc_health: Link UP Feb 13 19:11:13.448553 systemd-networkd[1398]: lxc_health: Gained carrier Feb 13 19:11:14.012032 kubelet[2646]: E0213 19:11:14.011612 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:14.782228 kubelet[2646]: E0213 19:11:14.780633 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:15.211456 kubelet[2646]: E0213 19:11:15.211407 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:15.219021 systemd-networkd[1398]: lxc_health: Gained IPv6LL Feb 13 19:11:16.213289 kubelet[2646]: E0213 19:11:16.213091 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:19.010626 kubelet[2646]: E0213 19:11:19.010582 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:19.410533 sshd[4453]: Connection closed by 10.0.0.1 port 55270 Feb 13 19:11:19.411688 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:19.415901 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:55270.service: Deactivated successfully. Feb 13 19:11:19.418343 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:11:19.421501 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:11:19.422510 systemd-logind[1454]: Removed session 25.
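The `pod_startup_latency_tracker` entry at 19:11:11.223 computes podStartSLOduration as observedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because no image pull was needed). Reproducing the arithmetic from the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 19:11:06 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 19:11:11.222991136 +0000 UTC")
	// Matches podStartSLOduration=5.222991136 from the kubelet log.
	fmt.Printf("%.9fs\n", running.Sub(created).Seconds())
}
```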