Feb 13 19:21:13.895259 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:21:13.895280 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:21:13.895289 kernel: KASLR enabled
Feb 13 19:21:13.895295 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:21:13.895301 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 19:21:13.895306 kernel: random: crng init done
Feb 13 19:21:13.895313 kernel: secureboot: Secure boot disabled
Feb 13 19:21:13.895319 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:21:13.895325 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:21:13.895332 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:21:13.895338 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895344 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895350 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895356 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895363 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895371 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895377 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895383 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895390 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:21:13.895396 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:21:13.895402 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:21:13.895408 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:21:13.895415 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 19:21:13.895421 kernel: Zone ranges:
Feb 13 19:21:13.895427 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:21:13.895435 kernel: DMA32 empty
Feb 13 19:21:13.895441 kernel: Normal empty
Feb 13 19:21:13.895447 kernel: Movable zone start for each node
Feb 13 19:21:13.895453 kernel: Early memory node ranges
Feb 13 19:21:13.895459 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:21:13.895466 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:21:13.895472 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:21:13.895478 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:21:13.895485 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:21:13.895491 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:21:13.895498 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:21:13.895504 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:21:13.895512 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:21:13.895519 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:21:13.895525 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:21:13.895534 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:21:13.895540 kernel: psci: Trusted OS migration not required
Feb 13 19:21:13.895547 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:21:13.895555 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:21:13.895561 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:21:13.895568 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:21:13.895574 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:21:13.895581 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:21:13.895588 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:21:13.895595 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:21:13.895601 kernel: CPU features: detected: Spectre-v4
Feb 13 19:21:13.895608 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:21:13.895615 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:21:13.895623 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:21:13.895630 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:21:13.895636 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:21:13.895643 kernel: alternatives: applying boot alternatives
Feb 13 19:21:13.895650 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:21:13.895657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:21:13.895664 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:21:13.895671 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:21:13.895677 kernel: Fallback order for Node 0: 0
Feb 13 19:21:13.895683 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:21:13.895690 kernel: Policy zone: DMA
Feb 13 19:21:13.895698 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:21:13.895705 kernel: software IO TLB: area num 4.
Feb 13 19:21:13.895711 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:21:13.895718 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Feb 13 19:21:13.895725 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:21:13.895731 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:21:13.895738 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:21:13.895745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:21:13.895752 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:21:13.895758 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:21:13.895765 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:21:13.895771 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:21:13.895779 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:21:13.895785 kernel: GICv3: 256 SPIs implemented
Feb 13 19:21:13.895792 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:21:13.895798 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:21:13.895805 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:21:13.895811 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:21:13.895818 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:21:13.895824 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:21:13.895831 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:21:13.895838 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:21:13.895844 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:21:13.895959 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:21:13.895971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:21:13.895978 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:21:13.895984 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:21:13.895991 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:21:13.895998 kernel: arm-pv: using stolen time PV
Feb 13 19:21:13.896012 kernel: Console: colour dummy device 80x25
Feb 13 19:21:13.896020 kernel: ACPI: Core revision 20230628
Feb 13 19:21:13.896027 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:21:13.896034 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:21:13.896045 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:21:13.896052 kernel: landlock: Up and running.
Feb 13 19:21:13.896058 kernel: SELinux: Initializing.
Feb 13 19:21:13.896065 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:21:13.896072 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:21:13.896079 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:21:13.896086 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:21:13.896092 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:21:13.896099 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:21:13.896107 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:21:13.896114 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:21:13.896121 kernel: Remapping and enabling EFI services.
Feb 13 19:21:13.896127 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:21:13.896134 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:21:13.896141 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:21:13.896147 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:21:13.896154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:21:13.896161 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:21:13.896168 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:21:13.896176 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:21:13.896183 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:21:13.896195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:21:13.896205 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:21:13.896212 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:21:13.896219 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:21:13.896226 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:21:13.896233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:21:13.896240 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:21:13.896249 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:21:13.896256 kernel: SMP: Total of 4 processors activated.
Feb 13 19:21:13.896263 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:21:13.896270 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:21:13.896277 kernel: CPU features: detected: Common not Private translations
Feb 13 19:21:13.896284 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:21:13.896291 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:21:13.896299 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:21:13.896308 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:21:13.896315 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:21:13.896322 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:21:13.896330 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:21:13.896337 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:21:13.896344 kernel: alternatives: applying system-wide alternatives
Feb 13 19:21:13.896350 kernel: devtmpfs: initialized
Feb 13 19:21:13.896358 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:21:13.896365 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:21:13.896373 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:21:13.896380 kernel: SMBIOS 3.0.0 present.
Feb 13 19:21:13.896388 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:21:13.896395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:21:13.896402 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:21:13.896410 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:21:13.896417 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:21:13.896424 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:21:13.896431 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:21:13.896440 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:21:13.896447 kernel: cpuidle: using governor menu
Feb 13 19:21:13.896454 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:21:13.896461 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:21:13.896469 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:21:13.896476 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:21:13.896483 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:21:13.896490 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:21:13.896497 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:21:13.896506 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:21:13.896513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:21:13.896521 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:21:13.896528 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:21:13.896535 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:21:13.896555 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:21:13.896562 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:21:13.896569 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:21:13.896576 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:21:13.896585 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:21:13.896593 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:21:13.896600 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:21:13.896608 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:21:13.896615 kernel: ACPI: Interpreter enabled
Feb 13 19:21:13.896622 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:21:13.896642 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:21:13.896650 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:21:13.896658 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:21:13.896667 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:21:13.896820 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:21:13.896893 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:21:13.896972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:21:13.897047 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:21:13.897113 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:21:13.897123 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:21:13.897133 kernel: PCI host bridge to bus 0000:00
Feb 13 19:21:13.897204 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:21:13.897263 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:21:13.897322 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:21:13.897378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:21:13.897458 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:21:13.897531 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:21:13.897617 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:21:13.897683 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:21:13.897748 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:21:13.897813 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:21:13.897878 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:21:13.897956 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:21:13.898025 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:21:13.898088 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:21:13.898146 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:21:13.898155 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:21:13.898163 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:21:13.898170 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:21:13.898177 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:21:13.898185 kernel: iommu: Default domain type: Translated
Feb 13 19:21:13.898192 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:21:13.898201 kernel: efivars: Registered efivars operations
Feb 13 19:21:13.898208 kernel: vgaarb: loaded
Feb 13 19:21:13.898216 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:21:13.898223 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:21:13.898231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:21:13.898238 kernel: pnp: PnP ACPI init
Feb 13 19:21:13.898330 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:21:13.898341 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:21:13.898351 kernel: NET: Registered PF_INET protocol family
Feb 13 19:21:13.898358 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:21:13.898366 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:21:13.898373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:21:13.898381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:21:13.898388 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:21:13.898395 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:21:13.898402 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:21:13.898410 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:21:13.898418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:21:13.898426 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:21:13.898433 kernel: kvm [1]: HYP mode not available
Feb 13 19:21:13.898440 kernel: Initialise system trusted keyrings
Feb 13 19:21:13.898447 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:21:13.898456 kernel: Key type asymmetric registered
Feb 13 19:21:13.898470 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:21:13.898478 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:21:13.898486 kernel: io scheduler mq-deadline registered
Feb 13 19:21:13.898494 kernel: io scheduler kyber registered
Feb 13 19:21:13.898501 kernel: io scheduler bfq registered
Feb 13 19:21:13.898508 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:21:13.898515 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:21:13.898523 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:21:13.898588 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:21:13.898597 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:21:13.898605 kernel: thunder_xcv, ver 1.0
Feb 13 19:21:13.898612 kernel: thunder_bgx, ver 1.0
Feb 13 19:21:13.898621 kernel: nicpf, ver 1.0
Feb 13 19:21:13.898628 kernel: nicvf, ver 1.0
Feb 13 19:21:13.898699 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:21:13.898761 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:21:13 UTC (1739474473)
Feb 13 19:21:13.898771 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:21:13.898778 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:21:13.898786 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:21:13.898793 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:21:13.898803 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:21:13.898810 kernel: Segment Routing with IPv6
Feb 13 19:21:13.898817 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:21:13.898824 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:21:13.898831 kernel: Key type dns_resolver registered
Feb 13 19:21:13.898838 kernel: registered taskstats version 1
Feb 13 19:21:13.898845 kernel: Loading compiled-in X.509 certificates
Feb 13 19:21:13.898853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:21:13.898860 kernel: Key type .fscrypt registered
Feb 13 19:21:13.898869 kernel: Key type fscrypt-provisioning registered
Feb 13 19:21:13.898876 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:21:13.898883 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:21:13.898890 kernel: ima: No architecture policies found
Feb 13 19:21:13.898897 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:21:13.898904 kernel: clk: Disabling unused clocks
Feb 13 19:21:13.898924 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:21:13.898931 kernel: Run /init as init process
Feb 13 19:21:13.898938 kernel: with arguments:
Feb 13 19:21:13.898948 kernel: /init
Feb 13 19:21:13.898955 kernel: with environment:
Feb 13 19:21:13.898962 kernel: HOME=/
Feb 13 19:21:13.898969 kernel: TERM=linux
Feb 13 19:21:13.898975 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:21:13.898985 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:21:13.898994 systemd[1]: Detected virtualization kvm.
Feb 13 19:21:13.899002 systemd[1]: Detected architecture arm64.
Feb 13 19:21:13.899018 systemd[1]: Running in initrd.
Feb 13 19:21:13.899025 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:21:13.899033 systemd[1]: Hostname set to .
Feb 13 19:21:13.899041 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:21:13.899048 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:21:13.899056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:21:13.899064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:21:13.899073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:21:13.899083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:21:13.899090 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:21:13.899098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:21:13.899108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:21:13.899116 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:21:13.899123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:21:13.899133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:21:13.899140 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:21:13.899148 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:21:13.899156 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:21:13.899164 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:21:13.899171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:21:13.899179 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:21:13.899187 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:21:13.899194 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:21:13.899208 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:21:13.899216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:21:13.899224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:21:13.899232 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:21:13.899239 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:21:13.899247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:21:13.899254 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:21:13.899262 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:21:13.899269 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:21:13.899279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:21:13.899286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:21:13.899294 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:21:13.899302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:21:13.899310 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:21:13.899318 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:21:13.899328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:21:13.899357 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:21:13.899378 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:21:13.899388 systemd-journald[239]: Journal started
Feb 13 19:21:13.899406 systemd-journald[239]: Runtime Journal (/run/log/journal/2c973380005c46f39b2901d9435a7cf1) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:21:13.891202 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:21:13.902423 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:21:13.902977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:21:13.906356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:21:13.909800 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:21:13.909084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:21:13.912045 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:21:13.913462 kernel: Bridge firewalling registered
Feb 13 19:21:13.913086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:21:13.916877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:21:13.917817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:21:13.919873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:21:13.924143 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:21:13.928963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:21:13.930124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:21:13.932990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:21:13.938973 dracut-cmdline[273]: dracut-dracut-053
Feb 13 19:21:13.941529 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:21:13.961081 systemd-resolved[281]: Positive Trust Anchors:
Feb 13 19:21:13.961160 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:21:13.961192 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:21:13.965982 systemd-resolved[281]: Defaulting to hostname 'linux'.
Feb 13 19:21:13.967411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:21:13.968330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:21:14.007937 kernel: SCSI subsystem initialized
Feb 13 19:21:14.011935 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:21:14.019940 kernel: iscsi: registered transport (tcp)
Feb 13 19:21:14.036941 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:21:14.036960 kernel: QLogic iSCSI HBA Driver
Feb 13 19:21:14.086982 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:21:14.096107 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:21:14.119566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:21:14.119650 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:21:14.119683 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:21:14.167958 kernel: raid6: neonx8 gen() 15747 MB/s
Feb 13 19:21:14.184941 kernel: raid6: neonx4 gen() 15637 MB/s
Feb 13 19:21:14.201962 kernel: raid6: neonx2 gen() 13173 MB/s
Feb 13 19:21:14.218942 kernel: raid6: neonx1 gen() 10480 MB/s
Feb 13 19:21:14.235958 kernel: raid6: int64x8 gen() 6949 MB/s
Feb 13 19:21:14.252940 kernel: raid6: int64x4 gen() 7346 MB/s
Feb 13 19:21:14.269940 kernel: raid6: int64x2 gen() 6123 MB/s
Feb 13 19:21:14.286943 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 13 19:21:14.286980 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s
Feb 13 19:21:14.303931 kernel: raid6: .... xor() 11919 MB/s, rmw enabled
Feb 13 19:21:14.303954 kernel: raid6: using neon recovery algorithm
Feb 13 19:21:14.309348 kernel: xor: measuring software checksum speed
Feb 13 19:21:14.309365 kernel: 8regs : 19816 MB/sec
Feb 13 19:21:14.309923 kernel: 32regs : 19660 MB/sec
Feb 13 19:21:14.310925 kernel: arm64_neon : 23976 MB/sec
Feb 13 19:21:14.310939 kernel: xor: using function: arm64_neon (23976 MB/sec)
Feb 13 19:21:14.362970 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:21:14.376958 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:21:14.389113 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:21:14.400752 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 19:21:14.404014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:21:14.415152 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:21:14.427069 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Feb 13 19:21:14.455841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:21:14.469197 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:21:14.509878 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:21:14.518282 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:21:14.533706 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:21:14.535292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:21:14.536518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:21:14.538368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:21:14.547151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:21:14.555629 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:21:14.573124 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:21:14.573231 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:21:14.573242 kernel: GPT:9289727 != 19775487
Feb 13 19:21:14.573252 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:21:14.573261 kernel: GPT:9289727 != 19775487
Feb 13 19:21:14.573269 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:21:14.573287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:21:14.558839 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:21:14.574213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:21:14.574285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:21:14.576196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:21:14.578727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:21:14.578813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:21:14.580946 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:21:14.589925 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (522)
Feb 13 19:21:14.589970 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Feb 13 19:21:14.594126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:21:14.604964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:21:14.609604 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:21:14.613955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:21:14.620798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:21:14.624393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:21:14.625291 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:21:14.641161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:21:14.642777 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:21:14.649871 disk-uuid[550]: Primary Header is updated.
Feb 13 19:21:14.649871 disk-uuid[550]: Secondary Entries is updated.
Feb 13 19:21:14.649871 disk-uuid[550]: Secondary Header is updated.
Feb 13 19:21:14.654935 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:21:14.664843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:21:15.666930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:21:15.666988 disk-uuid[553]: The operation has completed successfully.
Feb 13 19:21:15.688516 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:21:15.688615 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:21:15.710144 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:21:15.713107 sh[574]: Success
Feb 13 19:21:15.726936 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:21:15.757301 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:21:15.777398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:21:15.779384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:21:15.789668 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:21:15.789727 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:21:15.789738 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:21:15.790373 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:21:15.790418 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:21:15.794209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:21:15.795376 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:21:15.802096 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:21:15.803520 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:21:15.811388 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:21:15.811445 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:21:15.811455 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:21:15.813927 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:21:15.821820 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:21:15.823935 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:21:15.831601 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:21:15.844182 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:21:15.899783 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:21:15.910122 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:21:15.934596 systemd-networkd[764]: lo: Link UP
Feb 13 19:21:15.934610 systemd-networkd[764]: lo: Gained carrier
Feb 13 19:21:15.935142 ignition[668]: Ignition 2.20.0
Feb 13 19:21:15.935466 systemd-networkd[764]: Enumeration completed
Feb 13 19:21:15.935148 ignition[668]: Stage: fetch-offline
Feb 13 19:21:15.935577 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:21:15.935179 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:21:15.935922 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:21:15.935187 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:21:15.935926 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:21:15.935333 ignition[668]: parsed url from cmdline: ""
Feb 13 19:21:15.937454 systemd-networkd[764]: eth0: Link UP
Feb 13 19:21:15.935336 ignition[668]: no config URL provided
Feb 13 19:21:15.937458 systemd-networkd[764]: eth0: Gained carrier
Feb 13 19:21:15.935340 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:21:15.937466 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:21:15.935347 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:21:15.937482 systemd[1]: Reached target network.target - Network.
Feb 13 19:21:15.935371 ignition[668]: op(1): [started] loading QEMU firmware config module
Feb 13 19:21:15.935376 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:21:15.942600 ignition[668]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:21:15.953970 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:21:15.986528 ignition[668]: parsing config with SHA512: ab1b1b34088db979f9748a7dcede094e7ae597ed9e70f7bdc646ea2a8df8ec4683cd6c24018b3aee5e258081499847f9dda3dcd1e6204ba5612822c12124b656
Feb 13 19:21:15.991469 unknown[668]: fetched base config from "system"
Feb 13 19:21:15.991478 unknown[668]: fetched user config from "qemu"
Feb 13 19:21:15.992004 ignition[668]: fetch-offline: fetch-offline passed
Feb 13 19:21:15.993479 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:21:15.992096 ignition[668]: Ignition finished successfully
Feb 13 19:21:15.994605 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:21:16.004130 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:21:16.014110 ignition[771]: Ignition 2.20.0
Feb 13 19:21:16.014119 ignition[771]: Stage: kargs
Feb 13 19:21:16.014270 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:21:16.014279 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:21:16.015196 ignition[771]: kargs: kargs passed
Feb 13 19:21:16.015240 ignition[771]: Ignition finished successfully
Feb 13 19:21:16.018730 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:21:16.031270 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:21:16.040137 ignition[780]: Ignition 2.20.0
Feb 13 19:21:16.040148 ignition[780]: Stage: disks
Feb 13 19:21:16.040302 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:21:16.040311 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:21:16.041185 ignition[780]: disks: disks passed
Feb 13 19:21:16.042947 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:21:16.041229 ignition[780]: Ignition finished successfully
Feb 13 19:21:16.043830 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:21:16.045060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:21:16.046324 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:21:16.047617 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:21:16.049048 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:21:16.060101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:21:16.070135 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:21:16.073497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:21:16.086049 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:21:16.131942 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:21:16.132118 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:21:16.133164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:21:16.145981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:21:16.147474 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:21:16.148545 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:21:16.148598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:21:16.153754 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Feb 13 19:21:16.153775 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:21:16.148620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:21:16.156653 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:21:16.156710 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:21:16.154047 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:21:16.158541 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:21:16.160709 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:21:16.161803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:21:16.202516 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:21:16.206406 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:21:16.209925 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:21:16.213358 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:21:16.281637 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:21:16.292013 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:21:16.294329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:21:16.298924 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:21:16.314515 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:21:16.316642 ignition[911]: INFO : Ignition 2.20.0
Feb 13 19:21:16.316642 ignition[911]: INFO : Stage: mount
Feb 13 19:21:16.316642 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:21:16.316642 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:21:16.316642 ignition[911]: INFO : mount: mount passed
Feb 13 19:21:16.316642 ignition[911]: INFO : Ignition finished successfully
Feb 13 19:21:16.317277 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:21:16.322015 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:21:16.788410 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:21:16.797140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:21:16.803332 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Feb 13 19:21:16.803371 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:21:16.803391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:21:16.803974 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:21:16.806924 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:21:16.808129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:21:16.824274 ignition[942]: INFO : Ignition 2.20.0
Feb 13 19:21:16.824274 ignition[942]: INFO : Stage: files
Feb 13 19:21:16.825563 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:21:16.825563 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:21:16.825563 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:21:16.828242 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:21:16.828242 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:21:16.830191 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:21:16.830191 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:21:16.830191 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:21:16.830191 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:21:16.830191 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:21:16.829079 unknown[942]: wrote ssh authorized keys file for user: core
Feb 13 19:21:17.426556 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:21:17.795186 systemd-networkd[764]: eth0: Gained IPv6LL
Feb 13 19:21:17.881160 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:21:17.882891 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:21:17.882891 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:21:18.161187 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:21:18.236002 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:21:18.237509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:21:18.477339 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:21:18.699073 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:21:18.699073 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:21:18.701893 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:21:18.725539 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:21:18.729480 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:21:18.731734 ignition[942]: INFO : files: files passed
Feb 13 19:21:18.731734 ignition[942]: INFO : Ignition finished successfully
Feb 13 19:21:18.732208 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:21:18.745100 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:21:18.748121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:21:18.751827 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:21:18.752693 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:21:18.756825 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:21:18.759860 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:21:18.759860 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:21:18.762189 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:21:18.761813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:21:18.763763 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:21:18.774105 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:21:18.795995 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:21:18.796135 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:21:18.797812 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:21:18.799157 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:21:18.800519 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:21:18.801374 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:21:18.816940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:21:18.830127 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:21:18.838393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:21:18.839373 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:21:18.840860 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:21:18.842217 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:21:18.842348 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:21:18.844190 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:21:18.845664 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:21:18.846893 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:21:18.848203 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:21:18.849671 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:21:18.851111 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:21:18.852455 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:21:18.853858 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:21:18.855343 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:21:18.856641 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:21:18.857752 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:21:18.857885 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:21:18.859593 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:21:18.860990 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:21:18.862405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:21:18.862508 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:21:18.863901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:21:18.864046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:21:18.866124 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:21:18.866244 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:21:18.867660 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:21:18.868815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:21:18.871957 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:21:18.873866 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:21:18.874639 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:21:18.875780 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:21:18.875876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:21:18.877057 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:21:18.877137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:21:18.878261 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:21:18.878375 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:21:18.879659 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:21:18.879761 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:21:18.896110 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:21:18.897557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:21:18.898235 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:21:18.898354 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:21:18.899749 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:21:18.899840 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:21:18.904446 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:21:18.905930 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:21:18.909386 ignition[998]: INFO : Ignition 2.20.0 Feb 13 19:21:18.909386 ignition[998]: INFO : Stage: umount Feb 13 19:21:18.910882 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:21:18.910882 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:21:18.910882 ignition[998]: INFO : umount: umount passed Feb 13 19:21:18.910882 ignition[998]: INFO : Ignition finished successfully Feb 13 19:21:18.912261 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:21:18.912385 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:21:18.916004 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:21:18.916492 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:21:18.916582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:21:18.918556 systemd[1]: Stopped target network.target - Network. Feb 13 19:21:18.919423 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:21:18.919486 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:21:18.920844 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:21:18.920884 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:21:18.922242 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:21:18.922286 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:21:18.926217 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:21:18.926262 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:21:18.927554 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:21:18.927592 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:21:18.929244 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:21:18.930446 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:21:18.937546 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:21:18.937714 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:21:18.939001 systemd-networkd[764]: eth0: DHCPv6 lease lost Feb 13 19:21:18.940256 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:21:18.940311 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:21:18.942674 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:21:18.942792 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:21:18.944516 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:21:18.944574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:21:18.951052 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:21:18.951729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:21:18.951795 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:21:18.953295 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:21:18.953337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:21:18.954827 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:21:18.954876 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Feb 13 19:21:18.956517 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:21:18.966267 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:21:18.966410 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:21:18.969646 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:21:18.969812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:21:18.972119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:21:18.972217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:21:18.973710 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:21:18.973756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:21:18.975165 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:21:18.975222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:21:18.977461 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:21:18.977508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:21:18.979576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:21:18.979622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:21:18.991135 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:21:18.992019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:21:18.992084 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:21:18.993838 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:21:18.993882 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:21:18.995345 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:21:18.995384 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:21:18.996983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:21:18.997023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:21:18.998819 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:21:18.999986 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:21:19.001473 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:21:19.003285 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:21:19.013886 systemd[1]: Switching root. Feb 13 19:21:19.037834 systemd-journald[239]: Journal stopped Feb 13 19:21:19.755848 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
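Annotation: everything from ignition-quench through the udev socket closures above is the initrd cleaning up after itself; the "Switching root" entry is PID 1 pivoting from the initramfs into /sysroot, which is why journald[239] receives SIGTERM here and a fresh journald instance starts further down. From the booted system, the same handoff can be reviewed with standard systemd tooling, for example:

  journalctl -b -t ignition -o short-precise       # all Ignition stage output for this boot
  journalctl -b -u initrd-switch-root.service      # the pivot itself
  systemd-analyze critical-chain initrd-switch-root.target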
Feb 13 19:21:19.755904 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:21:19.756071 kernel: SELinux: policy capability open_perms=1 Feb 13 19:21:19.756083 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:21:19.756092 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:21:19.756101 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:21:19.756115 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:21:19.756124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:21:19.756134 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:21:19.756143 kernel: audit: type=1403 audit(1739474479.187:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:21:19.756153 systemd[1]: Successfully loaded SELinux policy in 34.008ms. Feb 13 19:21:19.756171 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.501ms. Feb 13 19:21:19.756182 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:21:19.756192 systemd[1]: Detected virtualization kvm. Feb 13 19:21:19.756202 systemd[1]: Detected architecture arm64. Feb 13 19:21:19.756212 systemd[1]: Detected first boot. Feb 13 19:21:19.756221 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:21:19.756231 zram_generator::config[1043]: No configuration found. Feb 13 19:21:19.756242 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:21:19.756258 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:21:19.756268 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:21:19.756290 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:21:19.756303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:21:19.756313 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:21:19.756323 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:21:19.756333 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:21:19.756343 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:21:19.756354 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:21:19.756728 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:21:19.756749 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:21:19.756759 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:21:19.756770 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:21:19.756780 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:21:19.756791 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:21:19.756801 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
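Annotation: the audit type=1403 record and the 34.008 ms figure above mark the SELinux policy load on first boot, and the long +PAM +AUDIT ... feature string is systemd 255's compile-time configuration. Both can be re-checked later with a generic sketch like this (the selinuxfs path assumes it is mounted, as it is here):

  systemctl --version                  # prints the same feature string as the log
  cat /sys/fs/selinux/enforce          # 0 = permissive, 1 = enforcing
  journalctl -b -k | grep SELinux      # the policy-capability lines above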
Feb 13 19:21:19.756817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:21:19.756828 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:21:19.756840 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:21:19.756851 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:21:19.756861 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:21:19.756871 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:21:19.756881 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:21:19.756891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:21:19.756901 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:21:19.756930 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:21:19.756942 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:21:19.756955 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:21:19.756972 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:21:19.756984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:21:19.756995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:21:19.757005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:21:19.757015 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:21:19.757025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:21:19.757035 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:21:19.757047 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:21:19.757057 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:21:19.757068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:21:19.757078 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:21:19.757088 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:21:19.757098 systemd[1]: Reached target machines.target - Containers. Feb 13 19:21:19.757108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:21:19.757119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:21:19.757130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:21:19.757141 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:21:19.757152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:21:19.757161 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:21:19.757172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:21:19.757183 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:21:19.757193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:21:19.757204 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:21:19.757214 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:21:19.757226 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:21:19.757236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:21:19.757245 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:21:19.757255 kernel: fuse: init (API version 7.39) Feb 13 19:21:19.757265 kernel: loop: module loaded Feb 13 19:21:19.757274 kernel: ACPI: bus type drm_connector registered Feb 13 19:21:19.757283 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:21:19.757293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:21:19.757304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:21:19.757315 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:21:19.757350 systemd-journald[1110]: Collecting audit messages is disabled. Feb 13 19:21:19.757373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:21:19.757384 systemd-journald[1110]: Journal started Feb 13 19:21:19.757405 systemd-journald[1110]: Runtime Journal (/run/log/journal/2c973380005c46f39b2901d9435a7cf1) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:21:19.573888 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:21:19.592450 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:21:19.592850 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:21:19.759232 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:21:19.759276 systemd[1]: Stopped verity-setup.service. Feb 13 19:21:19.762308 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:21:19.762977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:21:19.763853 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:21:19.764835 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:21:19.765721 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:21:19.766660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:21:19.767606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:21:19.768669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:21:19.770996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:21:19.772258 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:21:19.772401 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:21:19.773593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:21:19.773738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:21:19.774902 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:21:19.775098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:21:19.776161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:21:19.776303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
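Annotation: the journald lines above size the volatile journal at /run/log/journal (5.9M used, 47.3M cap); the flush to persistent /var/log/journal follows below. Those caps are tunable with stock journald.conf options — the values here are illustrative, chosen only to echo the logged limits:

  journalctl --disk-usage
  mkdir -p /etc/systemd/journald.conf.d
  cat > /etc/systemd/journald.conf.d/size.conf <<'EOF'
  [Journal]
  RuntimeMaxUse=48M
  SystemMaxUse=196M
  EOF
  systemctl restart systemd-journald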
Feb 13 19:21:19.777635 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:21:19.777774 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:21:19.778879 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:21:19.779059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:21:19.780187 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:21:19.781422 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:21:19.782587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:21:19.795020 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:21:19.802066 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:21:19.804055 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:21:19.804861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:21:19.804899 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:21:19.806653 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:21:19.808742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:21:19.810854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:21:19.811765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:21:19.813567 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:21:19.815459 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:21:19.816360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:21:19.820097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:21:19.821110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:21:19.823375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:21:19.824556 systemd-journald[1110]: Time spent on flushing to /var/log/journal/2c973380005c46f39b2901d9435a7cf1 is 25.337ms for 859 entries. Feb 13 19:21:19.824556 systemd-journald[1110]: System Journal (/var/log/journal/2c973380005c46f39b2901d9435a7cf1) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:21:19.856822 systemd-journald[1110]: Received client request to flush runtime journal. Feb 13 19:21:19.856859 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 19:21:19.827175 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:21:19.831316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:21:19.841989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:21:19.843579 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:21:19.844564 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Feb 13 19:21:19.847749 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:21:19.850704 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:21:19.855027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:21:19.868934 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:21:19.872678 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Feb 13 19:21:19.872694 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Feb 13 19:21:19.875089 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:21:19.877153 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:21:19.880321 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:21:19.881680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:21:19.883051 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:21:19.894931 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 19:21:19.901454 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:21:19.906282 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:21:19.906842 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:21:19.909379 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:21:19.923529 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:21:19.924930 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:21:19.941147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:21:19.955876 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 19:21:19.955899 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 19:21:19.960306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:21:19.966951 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 19:21:19.971956 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 19:21:19.975938 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:21:19.980594 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:21:19.981033 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 19:21:19.986718 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:21:19.986749 systemd[1]: Reloading... Feb 13 19:21:20.044035 zram_generator::config[1204]: No configuration found. Feb 13 19:21:20.110002 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:21:20.148157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:20.183112 systemd[1]: Reloading finished in 195 ms. Feb 13 19:21:20.214976 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
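Annotation: the loop0–loop5 capacity changes and the (sd-merge) lines above are systemd-sysext merging the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr; the "Reloading" that follows is systemd re-reading its unit set after the merge (note the reload client is systemd-sysext itself). The merge state can be inspected or redone at runtime:

  systemd-sysext status       # which images are merged, and from where
  ls -l /etc/extensions       # the kubernetes.raw symlink written by Ignition above
  systemd-sysext refresh      # re-merge after adding or removing an image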
Feb 13 19:21:20.216093 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:21:20.235296 systemd[1]: Starting ensure-sysext.service... Feb 13 19:21:20.237122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:21:20.250669 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:21:20.250778 systemd[1]: Reloading... Feb 13 19:21:20.258900 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:21:20.259294 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:21:20.259891 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:21:20.260128 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 19:21:20.260180 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 19:21:20.268374 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:21:20.268386 systemd-tmpfiles[1243]: Skipping /boot Feb 13 19:21:20.275235 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:21:20.275251 systemd-tmpfiles[1243]: Skipping /boot Feb 13 19:21:20.301173 zram_generator::config[1270]: No configuration found. Feb 13 19:21:20.383686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:20.418896 systemd[1]: Reloading finished in 167 ms. Feb 13 19:21:20.437816 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:21:20.450302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:21:20.457517 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:21:20.459677 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:21:20.461669 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:21:20.465632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:21:20.469039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:21:20.473490 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:21:20.477270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:21:20.480227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:21:20.486606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:21:20.491884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:21:20.492735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:21:20.497075 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Feb 13 19:21:20.498731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:21:20.502646 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Feb 13 19:21:20.504186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:21:20.504311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:21:20.505751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:21:20.505862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:21:20.507711 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:21:20.507828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:21:20.513678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:21:20.516980 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:21:20.530263 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:21:20.541584 systemd[1]: Finished ensure-sysext.service. Feb 13 19:21:20.545791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:21:20.552939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1333) Feb 13 19:21:20.571055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:21:20.578509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:21:20.582180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:21:20.585365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:21:20.588209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:21:20.592093 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:21:20.595115 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:21:20.598072 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:21:20.599349 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:21:20.599666 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:21:20.602373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:21:20.602743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:21:20.604318 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:21:20.604729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:21:20.606276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:21:20.606370 augenrules[1378]: No rules Feb 13 19:21:20.607088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:21:20.608257 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:21:20.608425 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:21:20.609788 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:21:20.610027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:21:20.611427 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Feb 13 19:21:20.629208 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:21:20.635974 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:21:20.640098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:21:20.640206 systemd-resolved[1309]: Positive Trust Anchors: Feb 13 19:21:20.642029 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:21:20.642065 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:21:20.643808 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:21:20.643874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:21:20.648048 systemd-resolved[1309]: Defaulting to hostname 'linux'. Feb 13 19:21:20.650304 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:21:20.651272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:21:20.664287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:21:20.685369 systemd-networkd[1373]: lo: Link UP Feb 13 19:21:20.685651 systemd-networkd[1373]: lo: Gained carrier Feb 13 19:21:20.686539 systemd-networkd[1373]: Enumeration completed Feb 13 19:21:20.687125 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:21:20.687991 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:21:20.691173 systemd-networkd[1373]: eth0: Link UP Feb 13 19:21:20.691182 systemd-networkd[1373]: eth0: Gained carrier Feb 13 19:21:20.691196 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:21:20.699187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:21:20.700106 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:21:20.701187 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:21:20.703515 systemd[1]: Reached target network.target - Network. Feb 13 19:21:20.704300 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:21:20.706394 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:21:20.711950 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
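Annotation: the ". IN DS 20326 8 2 ..." record above is systemd-resolved installing the root-zone DNSSEC trust anchor (the KSK-2017 DS record), and the long negative-anchor list exempts RFC 1918 reverse zones and similar private names from validation. Resolver state is visible at any time with resolvectl:

  resolvectl status      # per-link DNS servers, DNSSEC setting, current scopes
  resolvectl dns eth0    # just the DNS servers learned on eth0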
Feb 13 19:21:20.713057 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:21:20.715111 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Feb 13 19:21:20.715116 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:21:20.716573 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:21:20.716641 systemd-timesyncd[1374]: Initial clock synchronization to Thu 2025-02-13 19:21:20.953976 UTC. Feb 13 19:21:20.734032 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:21:20.740985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:21:20.766407 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:21:20.767532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:21:20.768371 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:21:20.769253 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:21:20.770148 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:21:20.771177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:21:20.772028 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:21:20.772888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:21:20.773746 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:21:20.773782 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:21:20.774452 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:21:20.775850 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:21:20.777862 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:21:20.785889 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:21:20.787771 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:21:20.789049 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:21:20.789899 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:21:20.790638 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:21:20.791355 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:21:20.791388 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:21:20.792196 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:21:20.793793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:21:20.795144 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:21:20.797072 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:21:20.798728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
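Annotation: above, eth0 obtains 10.0.0.130/16 via DHCPv4 from 10.0.0.1, and systemd-timesyncd contacts the same host on port 123, stepping the clock ("Initial clock synchronization" to 19:21:20.953976 UTC). Both are easy to confirm from a shell on the running system:

  networkctl status eth0         # address, gateway, DHCP lease details
  timedatectl timesync-status    # contacted server, poll interval, offset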
Feb 13 19:21:20.799517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:21:20.801157 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:21:20.810949 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:21:20.813110 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:21:20.814532 jq[1412]: false Feb 13 19:21:20.818081 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:21:20.820514 extend-filesystems[1413]: Found loop3 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found loop4 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found loop5 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda1 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda2 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda3 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found usr Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda4 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda6 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda7 Feb 13 19:21:20.820514 extend-filesystems[1413]: Found vda9 Feb 13 19:21:20.820514 extend-filesystems[1413]: Checking size of /dev/vda9 Feb 13 19:21:20.850618 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:21:20.826183 dbus-daemon[1411]: [system] SELinux support is enabled Feb 13 19:21:20.857166 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1345) Feb 13 19:21:20.827308 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:21:20.857289 extend-filesystems[1413]: Resized partition /dev/vda9 Feb 13 19:21:20.830068 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:21:20.858134 extend-filesystems[1431]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:21:20.830534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:21:20.831156 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:21:20.859295 jq[1433]: true Feb 13 19:21:20.833223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:21:20.834860 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:21:20.838277 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:21:20.855253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:21:20.855407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:21:20.855647 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:21:20.855782 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:21:20.858294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:21:20.858468 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
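Annotation: extend-filesystems above enumerates the block devices and finds /dev/vda9 smaller than its partition; the kernel and resize2fs lines that follow grow the mounted ext4 root online from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB). The manual equivalent of what the service just did:

  lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda
  resize2fs /dev/vda9    # ext4 supports online grow while mounted at /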
Feb 13 19:21:20.880933 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:21:20.887602 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:21:20.899512 jq[1438]: true Feb 13 19:21:20.899673 extend-filesystems[1431]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:21:20.899673 extend-filesystems[1431]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:21:20.899673 extend-filesystems[1431]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:21:20.904087 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Feb 13 19:21:20.909008 update_engine[1432]: I20250213 19:21:20.900524 1432 main.cc:92] Flatcar Update Engine starting Feb 13 19:21:20.909008 update_engine[1432]: I20250213 19:21:20.904435 1432 update_check_scheduler.cc:74] Next update check in 6m27s Feb 13 19:21:20.900416 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:21:20.901617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:21:20.903133 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:21:20.905241 systemd-logind[1426]: New seat seat0. Feb 13 19:21:20.909117 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:21:20.914565 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:21:20.920530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:21:20.920687 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:21:20.921727 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:21:20.921839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:21:20.924489 tar[1437]: linux-arm64/helm Feb 13 19:21:20.938562 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:21:20.962397 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:21:20.959560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:21:20.961711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:21:20.993282 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:21:21.088822 containerd[1445]: time="2025-02-13T19:21:21.088474326Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:21:21.118337 containerd[1445]: time="2025-02-13T19:21:21.118264188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.119966 containerd[1445]: time="2025-02-13T19:21:21.119837804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:21:21.119966 containerd[1445]: time="2025-02-13T19:21:21.119871537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:21:21.119966 containerd[1445]: time="2025-02-13T19:21:21.119888671Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:21:21.120096 containerd[1445]: time="2025-02-13T19:21:21.120076487Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:21:21.120141 containerd[1445]: time="2025-02-13T19:21:21.120100500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120176 containerd[1445]: time="2025-02-13T19:21:21.120160263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120199 containerd[1445]: time="2025-02-13T19:21:21.120175585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120357 containerd[1445]: time="2025-02-13T19:21:21.120339800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120377 containerd[1445]: time="2025-02-13T19:21:21.120358952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120377 containerd[1445]: time="2025-02-13T19:21:21.120372256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120418 containerd[1445]: time="2025-02-13T19:21:21.120382017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120558 containerd[1445]: time="2025-02-13T19:21:21.120469088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120686 containerd[1445]: time="2025-02-13T19:21:21.120666377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120786 containerd[1445]: time="2025-02-13T19:21:21.120768070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:21:21.120786 containerd[1445]: time="2025-02-13T19:21:21.120785080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:21:21.120873 containerd[1445]: time="2025-02-13T19:21:21.120858435Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:21:21.120916 containerd[1445]: time="2025-02-13T19:21:21.120903124Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:21:21.124333 containerd[1445]: time="2025-02-13T19:21:21.124306422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:21:21.124389 containerd[1445]: time="2025-02-13T19:21:21.124360789Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:21:21.124389 containerd[1445]: time="2025-02-13T19:21:21.124377100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:21:21.124434 containerd[1445]: time="2025-02-13T19:21:21.124392010Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:21:21.124434 containerd[1445]: time="2025-02-13T19:21:21.124406219Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:21:21.124569 containerd[1445]: time="2025-02-13T19:21:21.124547411Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:21:21.124796 containerd[1445]: time="2025-02-13T19:21:21.124778844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:21:21.124891 containerd[1445]: time="2025-02-13T19:21:21.124874482Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:21:21.124915 containerd[1445]: time="2025-02-13T19:21:21.124893593Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:21:21.124915 containerd[1445]: time="2025-02-13T19:21:21.124908462Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:21:21.124990 containerd[1445]: time="2025-02-13T19:21:21.124921519Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125080 containerd[1445]: time="2025-02-13T19:21:21.124933628Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125080 containerd[1445]: time="2025-02-13T19:21:21.125035361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125080 containerd[1445]: time="2025-02-13T19:21:21.125049901Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125080 containerd[1445]: time="2025-02-13T19:21:21.125064646Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125080 containerd[1445]: time="2025-02-13T19:21:21.125077084Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125089358Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125101509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125120743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125140143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125153199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125166091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125177788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125189898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125201224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125218 containerd[1445]: time="2025-02-13T19:21:21.125213457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125225978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125239899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125252297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125264406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125276145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125289695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125309589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125322522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125403 containerd[1445]: time="2025-02-13T19:21:21.125332572Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:21:21.125567 containerd[1445]: time="2025-02-13T19:21:21.125516969Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:21:21.125567 containerd[1445]: time="2025-02-13T19:21:21.125533979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:21:21.125567 containerd[1445]: time="2025-02-13T19:21:21.125544812Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:21:21.125626 containerd[1445]: time="2025-02-13T19:21:21.125567795Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:21:21.125626 containerd[1445]: time="2025-02-13T19:21:21.125577927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.125626 containerd[1445]: time="2025-02-13T19:21:21.125589748Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:21:21.125626 containerd[1445]: time="2025-02-13T19:21:21.125599097Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:21:21.125626 containerd[1445]: time="2025-02-13T19:21:21.125609971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:21:21.126098 containerd[1445]: time="2025-02-13T19:21:21.125981648Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:21:21.126098 containerd[1445]: time="2025-02-13T19:21:21.126033627Z" level=info msg="Connect containerd service" Feb 13 19:21:21.126098 containerd[1445]: time="2025-02-13T19:21:21.126067854Z" level=info msg="using legacy CRI server" Feb 13 19:21:21.126098 containerd[1445]: time="2025-02-13T19:21:21.126074939Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:21:21.126325 containerd[1445]: time="2025-02-13T19:21:21.126312921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:21:21.127058 containerd[1445]: time="2025-02-13T19:21:21.127032635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:21:21.127246 containerd[1445]: time="2025-02-13T19:21:21.127219174Z" level=info msg="Start subscribing containerd event" Feb 13 19:21:21.127285 containerd[1445]: time="2025-02-13T19:21:21.127261597Z" level=info msg="Start recovering state" Feb 13 19:21:21.127571 containerd[1445]: time="2025-02-13T19:21:21.127319919Z" level=info msg="Start event monitor" Feb 13 19:21:21.127571 containerd[1445]: time="2025-02-13T19:21:21.127341707Z" level=info msg="Start snapshots syncer" Feb 13 19:21:21.127571 containerd[1445]: time="2025-02-13T19:21:21.127351592Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:21:21.127571 containerd[1445]: time="2025-02-13T19:21:21.127359706Z" level=info msg="Start streaming server" Feb 13 19:21:21.128096 containerd[1445]: time="2025-02-13T19:21:21.128010060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:21:21.128339 containerd[1445]: time="2025-02-13T19:21:21.128320080Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:21:21.130469 containerd[1445]: time="2025-02-13T19:21:21.130445900Z" level=info msg="containerd successfully booted in 0.042957s" Feb 13 19:21:21.130536 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:21:21.259846 tar[1437]: linux-arm64/LICENSE Feb 13 19:21:21.259846 tar[1437]: linux-arm64/README.md Feb 13 19:21:21.274498 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:21:22.356947 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:21:22.375789 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:21:22.385254 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:21:22.390486 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:21:22.391985 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:21:22.394410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:21:22.405981 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:21:22.408506 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:21:22.410443 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:21:22.411545 systemd[1]: Reached target getty.target - Login Prompts. 
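At this point containerd is fully up: the CRI config dump above shows the overlayfs snapshotter, runc driven through io.containerd.runc.v2 with SystemdCgroup:true, the sandbox image registry.k8s.io/pause:3.8, and CNI directories /opt/cni/bin and /etc/cni/net.d. The "failed to load cni during init" error is expected on first boot: /etc/cni/net.d stays empty until a network add-on installs a conflist. As a hedged sketch (the config path and restart step are assumptions; Flatcar ships its own baseline config), a config.toml fragment matching the dumped settings would look like:

    # Sketch: containerd 1.x CRI settings matching the dump above.
    # The target path is an assumption; adjust for the distro's config layout.
    cat <<'EOF' | sudo tee /etc/containerd/config.toml
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
    sudo systemctl restart containerd
    sudo ctr plugins ls   # confirm which plugins loaded and which were skipped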
Feb 13 19:21:22.468125 systemd-networkd[1373]: eth0: Gained IPv6LL Feb 13 19:21:22.472018 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:21:22.474200 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:21:22.487251 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:21:22.489633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:22.491620 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:21:22.507526 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:21:22.507757 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:21:22.509608 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:21:22.511454 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:21:22.984088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:22.985374 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:21:22.988318 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:21:22.990015 systemd[1]: Startup finished in 572ms (kernel) + 5.489s (initrd) + 3.836s (userspace) = 9.898s. Feb 13 19:21:23.514076 kubelet[1525]: E0213 19:21:23.514022 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:21:23.516504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:21:23.516677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:21:26.473477 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:21:26.474633 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:59548.service - OpenSSH per-connection server daemon (10.0.0.1:59548). Feb 13 19:21:26.542007 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 59548 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:26.544188 sshd-session[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:26.554353 systemd-logind[1426]: New session 1 of user core. Feb 13 19:21:26.555316 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:21:26.563193 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:21:26.573972 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:21:26.576280 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:21:26.583296 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:21:26.651948 systemd[1543]: Queued start job for default target default.target. Feb 13 19:21:26.661945 systemd[1543]: Created slice app.slice - User Application Slice. Feb 13 19:21:26.661991 systemd[1543]: Reached target paths.target - Paths. Feb 13 19:21:26.662004 systemd[1543]: Reached target timers.target - Timers. Feb 13 19:21:26.663202 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... 
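The kubelet's first start above fails by design: /var/lib/kubelet/config.yaml is written by kubeadm init/join, so the unit crash-loops (the restart counter climbs later in this log) until that happens. Purely to illustrate the file format the error message is asking for, a minimal hand-written KubeletConfiguration could look like the sketch below; the field values are illustrative assumptions, not what kubeadm generates:

    # Sketch only - on this node kubeadm is expected to write the real file.
    sudo mkdir -p /var/lib/kubelet
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches SystemdCgroup=true in containerd
    staticPodPath: /etc/kubernetes/manifests
    EOF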
Feb 13 19:21:26.672653 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:21:26.672714 systemd[1543]: Reached target sockets.target - Sockets. Feb 13 19:21:26.672726 systemd[1543]: Reached target basic.target - Basic System. Feb 13 19:21:26.672761 systemd[1543]: Reached target default.target - Main User Target. Feb 13 19:21:26.672787 systemd[1543]: Startup finished in 84ms. Feb 13 19:21:26.673030 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:21:26.674288 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:21:26.735527 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:59562.service - OpenSSH per-connection server daemon (10.0.0.1:59562). Feb 13 19:21:26.780636 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 59562 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:26.782027 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:26.786546 systemd-logind[1426]: New session 2 of user core. Feb 13 19:21:26.796093 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:21:26.849365 sshd[1556]: Connection closed by 10.0.0.1 port 59562 Feb 13 19:21:26.849842 sshd-session[1554]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:26.865555 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:59562.service: Deactivated successfully. Feb 13 19:21:26.867561 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:21:26.869110 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:21:26.871236 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566). Feb 13 19:21:26.872410 systemd-logind[1426]: Removed session 2. Feb 13 19:21:26.912340 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:26.913601 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:26.918137 systemd-logind[1426]: New session 3 of user core. Feb 13 19:21:26.927114 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:21:26.976495 sshd[1563]: Connection closed by 10.0.0.1 port 59566 Feb 13 19:21:26.977136 sshd-session[1561]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:26.990854 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:59566.service: Deactivated successfully. Feb 13 19:21:26.992606 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:21:27.000547 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:21:27.017412 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:59578.service - OpenSSH per-connection server daemon (10.0.0.1:59578). Feb 13 19:21:27.018718 systemd-logind[1426]: Removed session 3. Feb 13 19:21:27.053054 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 59578 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:27.054252 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:27.060160 systemd-logind[1426]: New session 4 of user core. Feb 13 19:21:27.072140 systemd[1]: Started session-4.scope - Session 4 of User core. 
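Each SSH connection appears as its own per-connection unit (sshd@N-<listen>:22-<peer>:<port>.service) because sshd here runs socket-activated, and logind tracks the resulting session-N.scope units under user-500.slice. The live view of both:

    systemctl list-units 'sshd@*' --no-legend   # one unit per open connection
    loginctl list-sessions                      # logind's session table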
Feb 13 19:21:27.125669 sshd[1570]: Connection closed by 10.0.0.1 port 59578 Feb 13 19:21:27.125553 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:27.134402 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:59578.service: Deactivated successfully. Feb 13 19:21:27.135850 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:21:27.138428 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:21:27.144198 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:59594.service - OpenSSH per-connection server daemon (10.0.0.1:59594). Feb 13 19:21:27.144983 systemd-logind[1426]: Removed session 4. Feb 13 19:21:27.180061 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 59594 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:27.181358 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:27.185476 systemd-logind[1426]: New session 5 of user core. Feb 13 19:21:27.201088 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:21:27.259530 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:21:27.259828 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:21:27.278976 sudo[1578]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:27.280476 sshd[1577]: Connection closed by 10.0.0.1 port 59594 Feb 13 19:21:27.281007 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:27.289551 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:59594.service: Deactivated successfully. Feb 13 19:21:27.292228 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:21:27.293576 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:21:27.295077 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:59608.service - OpenSSH per-connection server daemon (10.0.0.1:59608). Feb 13 19:21:27.295723 systemd-logind[1426]: Removed session 5. Feb 13 19:21:27.334949 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 59608 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:27.336245 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:27.340240 systemd-logind[1426]: New session 6 of user core. Feb 13 19:21:27.352089 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:21:27.403122 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:21:27.403405 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:21:27.406412 sudo[1587]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:27.410888 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:21:27.411175 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:21:27.427272 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:21:27.449604 augenrules[1609]: No rules Feb 13 19:21:27.450758 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:21:27.450968 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
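The session-6 commands strip the shipped audit rules and restart audit-rules.service, after which augenrules compiles an empty ruleset ("No rules"). Assuming the standard auditd userspace tools are present, the effective ruleset can be rebuilt and verified with:

    sudo augenrules --load   # recompile /etc/audit/rules.d/*.rules and load them
    sudo auditctl -l         # prints "No rules" for an empty ruleset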
Feb 13 19:21:27.452115 sudo[1586]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:27.453822 sshd[1585]: Connection closed by 10.0.0.1 port 59608 Feb 13 19:21:27.453750 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:27.467225 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:59608.service: Deactivated successfully. Feb 13 19:21:27.468605 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:21:27.471011 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:21:27.472088 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:59612.service - OpenSSH per-connection server daemon (10.0.0.1:59612). Feb 13 19:21:27.472806 systemd-logind[1426]: Removed session 6. Feb 13 19:21:27.511755 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 59612 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:21:27.512825 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:27.517194 systemd-logind[1426]: New session 7 of user core. Feb 13 19:21:27.534102 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:21:27.586817 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:21:27.587138 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:21:27.909299 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:21:27.909844 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:21:28.162261 dockerd[1642]: time="2025-02-13T19:21:28.162128536Z" level=info msg="Starting up" Feb 13 19:21:28.318360 dockerd[1642]: time="2025-02-13T19:21:28.318309217Z" level=info msg="Loading containers: start." Feb 13 19:21:28.476968 kernel: Initializing XFRM netlink socket Feb 13 19:21:28.548203 systemd-networkd[1373]: docker0: Link UP Feb 13 19:21:28.590329 dockerd[1642]: time="2025-02-13T19:21:28.590285069Z" level=info msg="Loading containers: done." Feb 13 19:21:28.603052 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2065720858-merged.mount: Deactivated successfully. Feb 13 19:21:28.606065 dockerd[1642]: time="2025-02-13T19:21:28.606019215Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:21:28.606165 dockerd[1642]: time="2025-02-13T19:21:28.606116982Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:21:28.606236 dockerd[1642]: time="2025-02-13T19:21:28.606219362Z" level=info msg="Daemon has completed initialization" Feb 13 19:21:28.639045 dockerd[1642]: time="2025-02-13T19:21:28.638904802Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:21:28.639180 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:21:29.279882 containerd[1445]: time="2025-02-13T19:21:29.279799140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:21:30.065620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount498367365.mount: Deactivated successfully. 
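dockerd now runs alongside containerd with the overlay2 storage driver; the "Not using native diff" warning only means the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR support makes Docker fall back to the slower naive differ when building images, not that anything is broken. The driver and version from the log can be confirmed with:

    docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 27.2.1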
Feb 13 19:21:31.477831 containerd[1445]: time="2025-02-13T19:21:31.477759325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:31.478307 containerd[1445]: time="2025-02-13T19:21:31.478259562Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:21:31.479961 containerd[1445]: time="2025-02-13T19:21:31.479009172Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:31.482972 containerd[1445]: time="2025-02-13T19:21:31.482941853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:31.484081 containerd[1445]: time="2025-02-13T19:21:31.483988526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.204145605s" Feb 13 19:21:31.484081 containerd[1445]: time="2025-02-13T19:21:31.484021743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:21:31.503135 containerd[1445]: time="2025-02-13T19:21:31.503084091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:21:33.305984 containerd[1445]: time="2025-02-13T19:21:33.305932528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:33.307209 containerd[1445]: time="2025-02-13T19:21:33.307147149Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:21:33.308630 containerd[1445]: time="2025-02-13T19:21:33.308571979Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:33.311439 containerd[1445]: time="2025-02-13T19:21:33.311383051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:33.312674 containerd[1445]: time="2025-02-13T19:21:33.312639440Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.80939875s" Feb 13 19:21:33.312720 containerd[1445]: time="2025-02-13T19:21:33.312675696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
19:21:33.330662 containerd[1445]: time="2025-02-13T19:21:33.330574222Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:21:33.677870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:21:33.690093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:33.779185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:33.782613 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:21:33.824486 kubelet[1923]: E0213 19:21:33.824438 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:21:33.827218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:21:33.827353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:21:34.529400 containerd[1445]: time="2025-02-13T19:21:34.529355472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:34.530401 containerd[1445]: time="2025-02-13T19:21:34.530336861Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:21:34.531238 containerd[1445]: time="2025-02-13T19:21:34.530947600Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:34.534003 containerd[1445]: time="2025-02-13T19:21:34.533945205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:34.535157 containerd[1445]: time="2025-02-13T19:21:34.535123941Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.204510895s" Feb 13 19:21:34.535216 containerd[1445]: time="2025-02-13T19:21:34.535163346Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:21:34.553508 containerd[1445]: time="2025-02-13T19:21:34.553473664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:21:35.593541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027939520.mount: Deactivated successfully. 
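These PullImage/ImageCreate pairs are containerd's CRI image service fetching the control-plane images ahead of kubeadm. The same pulls can be reproduced or inspected by hand with crictl; the endpoint flag is only needed when /etc/crictl.yaml is not configured:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.10
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images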
Feb 13 19:21:35.914686 containerd[1445]: time="2025-02-13T19:21:35.914366727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:35.915529 containerd[1445]: time="2025-02-13T19:21:35.914972006Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:21:35.916337 containerd[1445]: time="2025-02-13T19:21:35.916289169Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:35.918932 containerd[1445]: time="2025-02-13T19:21:35.918886006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:35.919543 containerd[1445]: time="2025-02-13T19:21:35.919472881Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.365962269s" Feb 13 19:21:35.919543 containerd[1445]: time="2025-02-13T19:21:35.919496348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:21:35.938582 containerd[1445]: time="2025-02-13T19:21:35.938541555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:21:36.528903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731766479.mount: Deactivated successfully. 
Feb 13 19:21:37.278187 containerd[1445]: time="2025-02-13T19:21:37.278138622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.279338 containerd[1445]: time="2025-02-13T19:21:37.279055505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:21:37.280181 containerd[1445]: time="2025-02-13T19:21:37.280141178Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.286717 containerd[1445]: time="2025-02-13T19:21:37.286685119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.287796 containerd[1445]: time="2025-02-13T19:21:37.287762282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.349185615s" Feb 13 19:21:37.287796 containerd[1445]: time="2025-02-13T19:21:37.287797164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:21:37.306162 containerd[1445]: time="2025-02-13T19:21:37.306121298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:21:37.715815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402634493.mount: Deactivated successfully. 
Feb 13 19:21:37.720850 containerd[1445]: time="2025-02-13T19:21:37.720619799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.721411 containerd[1445]: time="2025-02-13T19:21:37.721163338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:21:37.722100 containerd[1445]: time="2025-02-13T19:21:37.722046302Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.724369 containerd[1445]: time="2025-02-13T19:21:37.724309448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:37.725239 containerd[1445]: time="2025-02-13T19:21:37.725154721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 418.991998ms" Feb 13 19:21:37.725239 containerd[1445]: time="2025-02-13T19:21:37.725186673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:21:37.745219 containerd[1445]: time="2025-02-13T19:21:37.745157359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:21:38.286475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718705800.mount: Deactivated successfully. Feb 13 19:21:40.143409 containerd[1445]: time="2025-02-13T19:21:40.143352938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:40.145323 containerd[1445]: time="2025-02-13T19:21:40.145261971Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:21:40.146107 containerd[1445]: time="2025-02-13T19:21:40.146065534Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:40.150195 containerd[1445]: time="2025-02-13T19:21:40.149584299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:40.150958 containerd[1445]: time="2025-02-13T19:21:40.150857161Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.405663123s" Feb 13 19:21:40.150958 containerd[1445]: time="2025-02-13T19:21:40.150893647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:21:43.927828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
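With etcd 3.5.12-0 done, the full image set a v1.30.10 control plane needs has now been pre-pulled: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd. kubeadm can print or pre-pull exactly that set:

    kubeadm config images list --kubernetes-version v1.30.10
    kubeadm config images pull --kubernetes-version v1.30.10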
Feb 13 19:21:43.937132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:44.056319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:44.060288 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:21:44.095817 kubelet[2147]: E0213 19:21:44.095772 2147 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:21:44.098535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:21:44.098672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:21:44.118982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:44.135195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:44.155866 systemd[1]: Reloading requested from client PID 2162 ('systemctl') (unit session-7.scope)... Feb 13 19:21:44.155881 systemd[1]: Reloading... Feb 13 19:21:44.215945 zram_generator::config[2204]: No configuration found. Feb 13 19:21:44.356144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:44.406870 systemd[1]: Reloading finished in 250 ms. Feb 13 19:21:44.446365 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:44.449063 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:21:44.449241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:44.450647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:44.544512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:44.548384 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:21:44.588858 kubelet[2248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:44.588858 kubelet[2248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:21:44.588858 kubelet[2248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
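kubelet[2248] now starts for real, with its flags injected through systemd environment variables; the "Referenced but unset ... KUBELET_EXTRA_ARGS" notice and the three deprecation warnings both come from that wiring. A sketch of the kubeadm-style drop-in that produces this pattern follows; the exact path, binary location, and variable split on this image are assumptions:

    # Sketch of a kubeadm-style kubelet drop-in (paths assumed).
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    Environment="KUBELET_EXTRA_ARGS="
    ExecStart=
    ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet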
Feb 13 19:21:44.590093 kubelet[2248]: I0213 19:21:44.590025 2248 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:21:46.278938 kubelet[2248]: I0213 19:21:46.278557 2248 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:21:46.278938 kubelet[2248]: I0213 19:21:46.278589 2248 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:21:46.278938 kubelet[2248]: I0213 19:21:46.278798 2248 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:21:46.312590 kubelet[2248]: E0213 19:21:46.312563 2248 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.312808 kubelet[2248]: I0213 19:21:46.312775 2248 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:21:46.322655 kubelet[2248]: I0213 19:21:46.322631 2248 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:21:46.323933 kubelet[2248]: I0213 19:21:46.323230 2248 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:21:46.323933 kubelet[2248]: I0213 19:21:46.323259 2248 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:21:46.323933 kubelet[2248]: I0213 19:21:46.323480 2248 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:21:46.323933 kubelet[2248]: I0213 19:21:46.323488 2248 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:21:46.323933 kubelet[2248]: I0213 19:21:46.323719 2248 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:21:46.326461 kubelet[2248]: I0213 19:21:46.326433 2248 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:21:46.326461 kubelet[2248]: I0213 19:21:46.326463 2248 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:21:46.326934 kubelet[2248]: I0213 19:21:46.326771 2248 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:21:46.326934 kubelet[2248]: I0213 19:21:46.326928 2248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:21:46.328163 kubelet[2248]: W0213 19:21:46.328118 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.328263 kubelet[2248]: I0213 19:21:46.328241 2248 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:21:46.328352 kubelet[2248]: W0213 19:21:46.328184 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.328392 kubelet[2248]: E0213 19:21:46.328376 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.328446 kubelet[2248]: E0213 19:21:46.328432 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.328602 kubelet[2248]: I0213 19:21:46.328590 2248 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:21:46.328702 kubelet[2248]: W0213 19:21:46.328691 2248 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
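The node-config dump above shows the kubelet falling back to its built-in hard-eviction thresholds (memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%). To pin those defaults explicitly, the equivalent KubeletConfiguration stanza can be appended to the config file; this sketch assumes evictionHard is not already set there:

    # Make the defaults from the dump explicit (assumes the key is absent).
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF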
Feb 13 19:21:46.329456 kubelet[2248]: I0213 19:21:46.329443 2248 server.go:1264] "Started kubelet" Feb 13 19:21:46.330289 kubelet[2248]: I0213 19:21:46.329623 2248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:21:46.330289 kubelet[2248]: I0213 19:21:46.329933 2248 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:21:46.330289 kubelet[2248]: I0213 19:21:46.329975 2248 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:21:46.330869 kubelet[2248]: I0213 19:21:46.330525 2248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:21:46.330965 kubelet[2248]: I0213 19:21:46.330935 2248 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:21:46.332836 kubelet[2248]: E0213 19:21:46.332528 2248 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dad8ed9d0766 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:21:46.329425766 +0000 UTC m=+1.778124456,LastTimestamp:2025-02-13 19:21:46.329425766 +0000 UTC m=+1.778124456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:21:46.333601 kubelet[2248]: E0213 19:21:46.333484 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:46.333656 kubelet[2248]: I0213 19:21:46.333632 2248 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:21:46.333749 kubelet[2248]: I0213 19:21:46.333725 2248 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:21:46.333867 kubelet[2248]: I0213 19:21:46.333850 2248 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:21:46.334189 kubelet[2248]: E0213 19:21:46.333962 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Feb 13 19:21:46.334189 kubelet[2248]: W0213 19:21:46.334113 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.334189 kubelet[2248]: E0213 19:21:46.334149 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.335701 kubelet[2248]: I0213 19:21:46.335674 2248 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:21:46.335701 kubelet[2248]: I0213 19:21:46.335692 2248 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:21:46.335832 kubelet[2248]: I0213 
19:21:46.335760 2248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:21:46.340109 kubelet[2248]: E0213 19:21:46.339998 2248 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:21:46.344389 kubelet[2248]: I0213 19:21:46.344335 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:21:46.345733 kubelet[2248]: I0213 19:21:46.345363 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:21:46.345733 kubelet[2248]: I0213 19:21:46.345511 2248 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:21:46.345733 kubelet[2248]: I0213 19:21:46.345528 2248 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:21:46.345733 kubelet[2248]: E0213 19:21:46.345584 2248 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:21:46.346776 kubelet[2248]: W0213 19:21:46.346684 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.346776 kubelet[2248]: E0213 19:21:46.346731 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:46.347884 kubelet[2248]: I0213 19:21:46.347705 2248 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:21:46.347884 kubelet[2248]: I0213 19:21:46.347720 2248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:21:46.347884 kubelet[2248]: I0213 19:21:46.347734 2248 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:46.435094 kubelet[2248]: I0213 19:21:46.435062 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:46.435411 kubelet[2248]: E0213 19:21:46.435389 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Feb 13 19:21:46.440166 kubelet[2248]: I0213 19:21:46.440136 2248 policy_none.go:49] "None policy: Start" Feb 13 19:21:46.440929 kubelet[2248]: I0213 19:21:46.440879 2248 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:21:46.441009 kubelet[2248]: I0213 19:21:46.440943 2248 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:21:46.445819 kubelet[2248]: E0213 19:21:46.445798 2248 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:21:46.446198 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:21:46.456130 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:21:46.458728 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
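Every "dial tcp 10.0.0.130:6443: connect: connection refused" in this stretch is the normal kubeadm bootstrap order, not a network fault: the kubelet is up, but the API server it is trying to reach will itself be started by this kubelet as a static pod, so reflectors, the node lease, event posting, and node registration all fail and retry with growing backoff (200ms, then 400ms, then 800ms below). Watching the gap close from the node:

    ss -tlnp | grep 6443 || echo "apiserver not listening yet"
    # Once the static pod is up, healthz answers (403 if anonymous auth is off):
    curl -k https://10.0.0.130:6443/healthz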
Feb 13 19:21:46.470607 kubelet[2248]: I0213 19:21:46.470569 2248 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:21:46.470828 kubelet[2248]: I0213 19:21:46.470749 2248 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:21:46.470865 kubelet[2248]: I0213 19:21:46.470850 2248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:21:46.471954 kubelet[2248]: E0213 19:21:46.471928 2248 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:21:46.535508 kubelet[2248]: E0213 19:21:46.535378 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Feb 13 19:21:46.636619 kubelet[2248]: I0213 19:21:46.636569 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:46.636966 kubelet[2248]: E0213 19:21:46.636919 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Feb 13 19:21:46.646079 kubelet[2248]: I0213 19:21:46.646014 2248 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:21:46.646967 kubelet[2248]: I0213 19:21:46.646935 2248 topology_manager.go:215] "Topology Admit Handler" podUID="288f1ad501c6fccda5ee26bbb290c085" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:21:46.647817 kubelet[2248]: I0213 19:21:46.647775 2248 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:21:46.652990 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 19:21:46.663717 systemd[1]: Created slice kubepods-burstable-pod288f1ad501c6fccda5ee26bbb290c085.slice - libcontainer container kubepods-burstable-pod288f1ad501c6fccda5ee26bbb290c085.slice. Feb 13 19:21:46.667045 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. 
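The three Topology Admit Handler entries are the static pods the kubelet just read from /etc/kubernetes/manifests; with the systemd cgroup driver, each one gets a kubepods-burstable-pod<uid>.slice, matching the slices created above. To correlate manifests, pod UIDs, and slices on the node:

    ls /etc/kubernetes/manifests    # kube-apiserver/controller-manager/scheduler YAMLs
    systemd-cgls --no-pager /kubepods.slice/kubepods-burstable.slice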
Feb 13 19:21:46.835537 kubelet[2248]: I0213 19:21:46.835398 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:46.835537 kubelet[2248]: I0213 19:21:46.835447 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:46.835537 kubelet[2248]: I0213 19:21:46.835472 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:46.835537 kubelet[2248]: I0213 19:21:46.835493 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:46.835537 kubelet[2248]: I0213 19:21:46.835509 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:21:46.835812 kubelet[2248]: I0213 19:21:46.835538 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:46.835812 kubelet[2248]: I0213 19:21:46.835554 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:46.835812 kubelet[2248]: I0213 19:21:46.835592 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:46.835812 kubelet[2248]: I0213 19:21:46.835633 2248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:46.936210 kubelet[2248]: E0213 19:21:46.936156 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Feb 13 19:21:46.961679 kubelet[2248]: E0213 19:21:46.961634 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:46.962403 containerd[1445]: time="2025-02-13T19:21:46.962356338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:46.966476 kubelet[2248]: E0213 19:21:46.966445 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:46.966790 containerd[1445]: time="2025-02-13T19:21:46.966762857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:288f1ad501c6fccda5ee26bbb290c085,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:46.969173 kubelet[2248]: E0213 19:21:46.969142 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:46.969776 containerd[1445]: time="2025-02-13T19:21:46.969726017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:47.038420 kubelet[2248]: I0213 19:21:47.038280 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:47.038798 kubelet[2248]: E0213 19:21:47.038754 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Feb 13 19:21:47.275199 kubelet[2248]: W0213 19:21:47.275133 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.275199 kubelet[2248]: E0213 19:21:47.275195 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.472119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704328100.mount: Deactivated successfully. 
Feb 13 19:21:47.476949 containerd[1445]: time="2025-02-13T19:21:47.476892786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:47.479235 containerd[1445]: time="2025-02-13T19:21:47.479157753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:21:47.479918 containerd[1445]: time="2025-02-13T19:21:47.479876775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:47.481375 containerd[1445]: time="2025-02-13T19:21:47.481312858Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:47.481980 containerd[1445]: time="2025-02-13T19:21:47.481903242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:21:47.482760 containerd[1445]: time="2025-02-13T19:21:47.482736811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:47.483328 containerd[1445]: time="2025-02-13T19:21:47.483305214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:21:47.485170 containerd[1445]: time="2025-02-13T19:21:47.485119005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:47.486839 containerd[1445]: time="2025-02-13T19:21:47.486769446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.333586ms" Feb 13 19:21:47.488130 containerd[1445]: time="2025-02-13T19:21:47.488099151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.277994ms" Feb 13 19:21:47.490808 containerd[1445]: time="2025-02-13T19:21:47.490777659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.992742ms" Feb 13 19:21:47.621349 containerd[1445]: time="2025-02-13T19:21:47.619256080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:47.621349 containerd[1445]: time="2025-02-13T19:21:47.619321140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:47.621349 containerd[1445]: time="2025-02-13T19:21:47.619336034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.621349 containerd[1445]: time="2025-02-13T19:21:47.619406299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.623159 containerd[1445]: time="2025-02-13T19:21:47.623067432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:47.623873 containerd[1445]: time="2025-02-13T19:21:47.623797105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:47.623933 containerd[1445]: time="2025-02-13T19:21:47.623886147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:47.623968 containerd[1445]: time="2025-02-13T19:21:47.623940637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.624100 containerd[1445]: time="2025-02-13T19:21:47.624049097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.625118 containerd[1445]: time="2025-02-13T19:21:47.623535824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:47.625118 containerd[1445]: time="2025-02-13T19:21:47.624441458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.625118 containerd[1445]: time="2025-02-13T19:21:47.624532262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:47.645090 systemd[1]: Started cri-containerd-3c4aaee4c5e544cd26f4d90f01e65aba1a07b9424b26cf4389962a849c21b5a9.scope - libcontainer container 3c4aaee4c5e544cd26f4d90f01e65aba1a07b9424b26cf4389962a849c21b5a9. Feb 13 19:21:47.646675 systemd[1]: Started cri-containerd-586d42a884296955033a420de65f54727b8964e4f53cf1d0a137bd9101d2dbea.scope - libcontainer container 586d42a884296955033a420de65f54727b8964e4f53cf1d0a137bd9101d2dbea. 
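
The ImageCreate/ImageUpdate events and the three "Pulled image ... in ~520ms" lines above are containerd's CRI plugin satisfying one pause-image pull per sandbox request; the second and third report bytes read=0 because the content is already local, so only the metadata round-trip is timed. The same pull can be issued and timed by hand over the CRI socket. A sketch, assuming the stock containerd socket path and the pause tag seen in the log:

    // pullpause.go — time an image pull over CRI, analogous to the
    // "Pulled image ... in 524.333586ms" lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        start := time.Now()
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }
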
Feb 13 19:21:47.648791 kubelet[2248]: W0213 19:21:47.648697 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.648791 kubelet[2248]: E0213 19:21:47.648740 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.650350 systemd[1]: Started cri-containerd-4effeedc49099569487b3c80e2a5dfa2e98db23a07064c7bbb4c638fbc03787d.scope - libcontainer container 4effeedc49099569487b3c80e2a5dfa2e98db23a07064c7bbb4c638fbc03787d. Feb 13 19:21:47.678896 containerd[1445]: time="2025-02-13T19:21:47.678819963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:288f1ad501c6fccda5ee26bbb290c085,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c4aaee4c5e544cd26f4d90f01e65aba1a07b9424b26cf4389962a849c21b5a9\"" Feb 13 19:21:47.680822 kubelet[2248]: E0213 19:21:47.680795 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:47.681049 containerd[1445]: time="2025-02-13T19:21:47.680975829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"586d42a884296955033a420de65f54727b8964e4f53cf1d0a137bd9101d2dbea\"" Feb 13 19:21:47.682385 kubelet[2248]: E0213 19:21:47.682357 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:47.684364 containerd[1445]: time="2025-02-13T19:21:47.684289843Z" level=info msg="CreateContainer within sandbox \"3c4aaee4c5e544cd26f4d90f01e65aba1a07b9424b26cf4389962a849c21b5a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:21:47.684486 containerd[1445]: time="2025-02-13T19:21:47.684348657Z" level=info msg="CreateContainer within sandbox \"586d42a884296955033a420de65f54727b8964e4f53cf1d0a137bd9101d2dbea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:21:47.687684 containerd[1445]: time="2025-02-13T19:21:47.687622354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"4effeedc49099569487b3c80e2a5dfa2e98db23a07064c7bbb4c638fbc03787d\"" Feb 13 19:21:47.688282 kubelet[2248]: E0213 19:21:47.688255 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:47.690232 containerd[1445]: time="2025-02-13T19:21:47.690171583Z" level=info msg="CreateContainer within sandbox \"4effeedc49099569487b3c80e2a5dfa2e98db23a07064c7bbb4c638fbc03787d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:21:47.700526 containerd[1445]: time="2025-02-13T19:21:47.700486807Z" level=info msg="CreateContainer within sandbox \"586d42a884296955033a420de65f54727b8964e4f53cf1d0a137bd9101d2dbea\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c51a39e72e1250eb8fb7fa3f4b73dd986a2ea9651034e71804411f57832431d\"" Feb 13 19:21:47.701647 containerd[1445]: time="2025-02-13T19:21:47.701615487Z" level=info msg="StartContainer for \"7c51a39e72e1250eb8fb7fa3f4b73dd986a2ea9651034e71804411f57832431d\"" Feb 13 19:21:47.706177 containerd[1445]: time="2025-02-13T19:21:47.706120558Z" level=info msg="CreateContainer within sandbox \"3c4aaee4c5e544cd26f4d90f01e65aba1a07b9424b26cf4389962a849c21b5a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"615d0caa4a65a33560e800a282c8db3127b4fa4a0461b2198c9abd6dad0c6269\"" Feb 13 19:21:47.706876 containerd[1445]: time="2025-02-13T19:21:47.706605044Z" level=info msg="StartContainer for \"615d0caa4a65a33560e800a282c8db3127b4fa4a0461b2198c9abd6dad0c6269\"" Feb 13 19:21:47.708520 containerd[1445]: time="2025-02-13T19:21:47.708479211Z" level=info msg="CreateContainer within sandbox \"4effeedc49099569487b3c80e2a5dfa2e98db23a07064c7bbb4c638fbc03787d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49b6b4b98cee783c13a36663c372a63b0d5fb6daabc0b1e37974b17602f8f09b\"" Feb 13 19:21:47.709003 containerd[1445]: time="2025-02-13T19:21:47.708969223Z" level=info msg="StartContainer for \"49b6b4b98cee783c13a36663c372a63b0d5fb6daabc0b1e37974b17602f8f09b\"" Feb 13 19:21:47.731092 systemd[1]: Started cri-containerd-7c51a39e72e1250eb8fb7fa3f4b73dd986a2ea9651034e71804411f57832431d.scope - libcontainer container 7c51a39e72e1250eb8fb7fa3f4b73dd986a2ea9651034e71804411f57832431d. Feb 13 19:21:47.734963 systemd[1]: Started cri-containerd-49b6b4b98cee783c13a36663c372a63b0d5fb6daabc0b1e37974b17602f8f09b.scope - libcontainer container 49b6b4b98cee783c13a36663c372a63b0d5fb6daabc0b1e37974b17602f8f09b. Feb 13 19:21:47.736004 systemd[1]: Started cri-containerd-615d0caa4a65a33560e800a282c8db3127b4fa4a0461b2198c9abd6dad0c6269.scope - libcontainer container 615d0caa4a65a33560e800a282c8db3127b4fa4a0461b2198c9abd6dad0c6269. 
Feb 13 19:21:47.736750 kubelet[2248]: E0213 19:21:47.736673 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Feb 13 19:21:47.784653 containerd[1445]: time="2025-02-13T19:21:47.784023659Z" level=info msg="StartContainer for \"49b6b4b98cee783c13a36663c372a63b0d5fb6daabc0b1e37974b17602f8f09b\" returns successfully" Feb 13 19:21:47.784653 containerd[1445]: time="2025-02-13T19:21:47.784135722Z" level=info msg="StartContainer for \"615d0caa4a65a33560e800a282c8db3127b4fa4a0461b2198c9abd6dad0c6269\" returns successfully" Feb 13 19:21:47.803692 containerd[1445]: time="2025-02-13T19:21:47.803652144Z" level=info msg="StartContainer for \"7c51a39e72e1250eb8fb7fa3f4b73dd986a2ea9651034e71804411f57832431d\" returns successfully" Feb 13 19:21:47.820038 kubelet[2248]: W0213 19:21:47.819898 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.820038 kubelet[2248]: E0213 19:21:47.820012 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.840456 kubelet[2248]: I0213 19:21:47.840422 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:47.841202 kubelet[2248]: W0213 19:21:47.841055 2248 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.841202 kubelet[2248]: E0213 19:21:47.841122 2248 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Feb 13 19:21:47.841202 kubelet[2248]: E0213 19:21:47.841183 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Feb 13 19:21:48.353760 kubelet[2248]: E0213 19:21:48.353310 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:48.357381 kubelet[2248]: E0213 19:21:48.357354 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:48.358740 kubelet[2248]: E0213 19:21:48.358575 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:49.340510 kubelet[2248]: E0213 19:21:49.340463 2248 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 
19:21:49.364962 kubelet[2248]: E0213 19:21:49.364876 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:49.442985 kubelet[2248]: I0213 19:21:49.442934 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:49.449239 kubelet[2248]: I0213 19:21:49.449198 2248 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:21:49.455690 kubelet[2248]: E0213 19:21:49.455638 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:49.556294 kubelet[2248]: E0213 19:21:49.556249 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:49.657676 kubelet[2248]: E0213 19:21:49.657241 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:49.757923 kubelet[2248]: E0213 19:21:49.757873 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:49.858645 kubelet[2248]: E0213 19:21:49.858600 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:50.329116 kubelet[2248]: I0213 19:21:50.328867 2248 apiserver.go:52] "Watching apiserver" Feb 13 19:21:50.335035 kubelet[2248]: I0213 19:21:50.334992 2248 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:21:50.377547 kubelet[2248]: E0213 19:21:50.377523 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:51.364059 kubelet[2248]: E0213 19:21:51.364013 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:51.516317 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-7.scope)... Feb 13 19:21:51.516334 systemd[1]: Reloading... Feb 13 19:21:51.576941 zram_generator::config[2569]: No configuration found. Feb 13 19:21:51.671195 kubelet[2248]: E0213 19:21:51.670623 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:51.685485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:51.749601 systemd[1]: Reloading finished in 232 ms. Feb 13 19:21:51.782357 kubelet[2248]: I0213 19:21:51.782295 2248 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:21:51.782451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:51.796857 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:21:51.797163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:51.797218 systemd[1]: kubelet.service: Consumed 2.112s CPU time, 115.9M memory peak, 0B memory swap peak. Feb 13 19:21:51.806165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
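
Earlier in this boot the lease controller's retry interval doubled between failures (interval="800ms" at 19:21:46.936, interval="1.6s" at 19:21:47.736) while 10.0.0.130:6443 still refused connections; registration finally succeeded at 19:21:49.44 once the apiserver container answered. A sketch of an ensure-lease loop with that doubling, using client-go — the kubeconfig path follows the kubeadm convention, and the doubling with a 7s cap is illustrative, not the kubelet's exact policy:

    // leaseretry.go — ensure the node lease exists, backing off on failure.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        interval := 800 * time.Millisecond
        for {
            _, err := cs.CoordinationV1().Leases("kube-node-lease").
                Get(context.Background(), "localhost", metav1.GetOptions{})
            if err == nil {
                fmt.Println("lease exists")
                return
            }
            fmt.Printf("ensure lease failed (%v), retrying in %s\n", err, interval)
            time.Sleep(interval)
            if interval *= 2; interval > 7*time.Second {
                interval = 7 * time.Second
            }
        }
    }
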
Feb 13 19:21:51.906103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:51.910434 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:21:51.951750 kubelet[2611]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:51.951750 kubelet[2611]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:21:51.951750 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:51.952125 kubelet[2611]: I0213 19:21:51.951780 2611 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:21:51.955966 kubelet[2611]: I0213 19:21:51.955827 2611 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:21:51.955966 kubelet[2611]: I0213 19:21:51.955853 2611 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:21:51.956093 kubelet[2611]: I0213 19:21:51.956027 2611 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:21:51.957347 kubelet[2611]: I0213 19:21:51.957322 2611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:21:51.959843 kubelet[2611]: I0213 19:21:51.959138 2611 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:21:51.964841 kubelet[2611]: I0213 19:21:51.964814 2611 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:21:51.965174 kubelet[2611]: I0213 19:21:51.965141 2611 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:21:51.965452 kubelet[2611]: I0213 19:21:51.965247 2611 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:21:51.965574 kubelet[2611]: I0213 19:21:51.965559 2611 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:21:51.965629 kubelet[2611]: I0213 19:21:51.965621 2611 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:21:51.965717 kubelet[2611]: I0213 19:21:51.965705 2611 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:51.965901 kubelet[2611]: I0213 19:21:51.965884 2611 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:21:51.966002 kubelet[2611]: I0213 19:21:51.965991 2611 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:21:51.966615 kubelet[2611]: I0213 19:21:51.966083 2611 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:21:51.966615 kubelet[2611]: I0213 19:21:51.966107 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:21:51.966793 kubelet[2611]: I0213 19:21:51.966759 2611 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:21:51.966961 kubelet[2611]: I0213 19:21:51.966947 2611 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:21:51.969918 kubelet[2611]: I0213 19:21:51.967329 2611 server.go:1264] "Started kubelet" Feb 13 19:21:51.969918 kubelet[2611]: I0213 19:21:51.967651 2611 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:21:51.969918 kubelet[2611]: I0213 19:21:51.967864 2611 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
19:21:51.969918 kubelet[2611]: I0213 19:21:51.967896 2611 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:21:51.969918 kubelet[2611]: I0213 19:21:51.968247 2611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:21:51.969918 kubelet[2611]: I0213 19:21:51.968750 2611 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:21:51.972661 kubelet[2611]: E0213 19:21:51.972534 2611 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:21:51.972661 kubelet[2611]: I0213 19:21:51.972591 2611 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:21:51.972796 kubelet[2611]: I0213 19:21:51.972701 2611 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:21:51.972992 kubelet[2611]: I0213 19:21:51.972971 2611 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:21:51.979437 kubelet[2611]: I0213 19:21:51.979384 2611 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:21:51.979566 kubelet[2611]: I0213 19:21:51.979540 2611 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:21:51.981344 kubelet[2611]: E0213 19:21:51.981300 2611 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:21:51.982656 kubelet[2611]: I0213 19:21:51.982620 2611 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:21:51.992756 kubelet[2611]: I0213 19:21:51.992625 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:21:51.994649 kubelet[2611]: I0213 19:21:51.994172 2611 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:21:51.994649 kubelet[2611]: I0213 19:21:51.994211 2611 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:21:51.994649 kubelet[2611]: I0213 19:21:51.994229 2611 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:21:51.994649 kubelet[2611]: E0213 19:21:51.994278 2611 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:21:52.023762 kubelet[2611]: I0213 19:21:52.023737 2611 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:21:52.023762 kubelet[2611]: I0213 19:21:52.023754 2611 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:21:52.023762 kubelet[2611]: I0213 19:21:52.023775 2611 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:52.023979 kubelet[2611]: I0213 19:21:52.023961 2611 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:21:52.024003 kubelet[2611]: I0213 19:21:52.023978 2611 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:21:52.024003 kubelet[2611]: I0213 19:21:52.023996 2611 policy_none.go:49] "None policy: Start" Feb 13 19:21:52.024592 kubelet[2611]: I0213 19:21:52.024573 2611 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:21:52.024592 kubelet[2611]: I0213 19:21:52.024596 2611 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:21:52.024723 kubelet[2611]: I0213 19:21:52.024708 2611 state_mem.go:75] "Updated machine memory state" Feb 13 19:21:52.029124 kubelet[2611]: I0213 19:21:52.029095 2611 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:21:52.029312 kubelet[2611]: I0213 19:21:52.029263 2611 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:21:52.029436 kubelet[2611]: I0213 19:21:52.029373 2611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:21:52.077645 kubelet[2611]: I0213 19:21:52.077616 2611 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:21:52.085108 kubelet[2611]: I0213 19:21:52.085062 2611 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:21:52.085924 kubelet[2611]: I0213 19:21:52.085162 2611 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:21:52.094553 kubelet[2611]: I0213 19:21:52.094427 2611 topology_manager.go:215] "Topology Admit Handler" podUID="288f1ad501c6fccda5ee26bbb290c085" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:21:52.094553 kubelet[2611]: I0213 19:21:52.094537 2611 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:21:52.094839 kubelet[2611]: I0213 19:21:52.094577 2611 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:21:52.100579 kubelet[2611]: E0213 19:21:52.100523 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:52.101101 kubelet[2611]: E0213 19:21:52.101060 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174310 kubelet[2611]: I0213 19:21:52.174264 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:52.174310 kubelet[2611]: I0213 19:21:52.174308 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174447 kubelet[2611]: I0213 19:21:52.174330 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174447 kubelet[2611]: I0213 19:21:52.174350 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174447 kubelet[2611]: I0213 19:21:52.174366 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:21:52.174447 kubelet[2611]: I0213 19:21:52.174384 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:52.174447 kubelet[2611]: I0213 19:21:52.174401 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174580 kubelet[2611]: I0213 19:21:52.174417 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:52.174580 kubelet[2611]: I0213 19:21:52.174436 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/288f1ad501c6fccda5ee26bbb290c085-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"288f1ad501c6fccda5ee26bbb290c085\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:52.403011 kubelet[2611]: E0213 19:21:52.402888 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:52.403142 kubelet[2611]: E0213 19:21:52.403107 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:52.403282 kubelet[2611]: E0213 19:21:52.403247 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:52.526676 sudo[2647]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:21:52.527003 sudo[2647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:21:52.950470 sudo[2647]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:52.966548 kubelet[2611]: I0213 19:21:52.966433 2611 apiserver.go:52] "Watching apiserver" Feb 13 19:21:52.977904 kubelet[2611]: I0213 19:21:52.977847 2611 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:21:53.028182 kubelet[2611]: E0213 19:21:53.027998 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:21:53.028182 kubelet[2611]: E0213 19:21:53.028015 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:21:53.028337 kubelet[2611]: E0213 19:21:53.028300 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:53.028509 kubelet[2611]: E0213 19:21:53.028443 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:53.031514 kubelet[2611]: E0213 19:21:53.031478 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:21:53.034077 kubelet[2611]: E0213 19:21:53.034045 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:53.037558 kubelet[2611]: I0213 19:21:53.037076 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.0370616 podStartE2EDuration="1.0370616s" podCreationTimestamp="2025-02-13 19:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:53.037054197 +0000 UTC m=+1.123697362" watchObservedRunningTime="2025-02-13 19:21:53.0370616 +0000 UTC m=+1.123704765" Feb 13 19:21:53.055493 kubelet[2611]: I0213 19:21:53.055213 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=3.055195149 podStartE2EDuration="3.055195149s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:53.046348262 +0000 UTC m=+1.132991427" watchObservedRunningTime="2025-02-13 19:21:53.055195149 +0000 UTC m=+1.141838314" Feb 13 19:21:53.055493 kubelet[2611]: I0213 19:21:53.055334 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.055329575 podStartE2EDuration="2.055329575s" podCreationTimestamp="2025-02-13 19:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:53.055152928 +0000 UTC m=+1.141796093" watchObservedRunningTime="2025-02-13 19:21:53.055329575 +0000 UTC m=+1.141972740" Feb 13 19:21:54.018339 kubelet[2611]: E0213 19:21:54.018295 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:54.018670 kubelet[2611]: E0213 19:21:54.018355 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:54.018747 kubelet[2611]: E0213 19:21:54.018716 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:55.021394 kubelet[2611]: E0213 19:21:55.020365 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:55.021394 kubelet[2611]: E0213 19:21:55.021289 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:21:55.576393 sudo[1620]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:55.577616 sshd[1619]: Connection closed by 10.0.0.1 port 59612 Feb 13 19:21:55.578058 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:55.580720 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:59612.service: Deactivated successfully. Feb 13 19:21:55.582362 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:21:55.582525 systemd[1]: session-7.scope: Consumed 7.394s CPU time, 193.2M memory peak, 0B memory swap peak. Feb 13 19:21:55.583605 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:21:55.584846 systemd-logind[1426]: Removed session 7. 
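
For these static pods the pod_startup_latency_tracker numbers reduce to plain timestamp subtraction: with firstStartedPulling/lastFinishedPulling at the zero time (no image pull contributed), podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp; the m=+... suffixes are monotonic-clock offsets since kubelet start and can be ignored for this arithmetic. Recomputing the kube-scheduler figure from the log's own values:

    // slo.go — podStartE2EDuration recomputed from the log timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2025-02-13 19:21:52 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-02-13 19:21:53.0370616 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 1.0370616s == podStartSLOduration=1.0370616
    }
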
Feb 13 19:22:00.725341 kubelet[2611]: E0213 19:22:00.725299 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:01.029181 kubelet[2611]: E0213 19:22:01.029073 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:02.450784 kubelet[2611]: E0213 19:22:02.450754 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:05.014381 kubelet[2611]: E0213 19:22:05.014309 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:06.101841 update_engine[1432]: I20250213 19:22:06.101735 1432 update_attempter.cc:509] Updating boot flags... Feb 13 19:22:06.133937 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2695) Feb 13 19:22:06.164930 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2697) Feb 13 19:22:07.025322 kubelet[2611]: I0213 19:22:07.025291 2611 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:22:07.026372 kubelet[2611]: I0213 19:22:07.025778 2611 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:22:07.026434 containerd[1445]: time="2025-02-13T19:22:07.025582413Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:22:07.867695 kubelet[2611]: I0213 19:22:07.867655 2611 topology_manager.go:215] "Topology Admit Handler" podUID="751b705c-17c6-4b2c-b380-e1dd1c538ebe" podNamespace="kube-system" podName="kube-proxy-57jc2" Feb 13 19:22:07.871730 kubelet[2611]: I0213 19:22:07.871560 2611 topology_manager.go:215] "Topology Admit Handler" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" podNamespace="kube-system" podName="cilium-xp86b" Feb 13 19:22:07.879712 systemd[1]: Created slice kubepods-besteffort-pod751b705c_17c6_4b2c_b380_e1dd1c538ebe.slice - libcontainer container kubepods-besteffort-pod751b705c_17c6_4b2c_b380_e1dd1c538ebe.slice. 
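
At 19:22:07 the kubelet hands the node's pod CIDR to the runtime ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"), and containerd then waits for a CNI plugin — the cilium pod admitted just above — to drop a config. The /24 leaves 8 host bits, i.e. 256 addresses for pods on this node; a quick standard-library check:

    // podcidr.go — inspect the pod CIDR pushed to the runtime above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("192.168.0.0/24")
        fmt.Println("prefix bits:", prefix.Bits())       // 24
        fmt.Println("addresses:", 1<<(32-prefix.Bits())) // 256
        fmt.Println("contains 192.168.0.42:",
            prefix.Contains(netip.MustParseAddr("192.168.0.42"))) // true
    }
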
Feb 13 19:22:07.892085 kubelet[2611]: I0213 19:22:07.892003 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-cgroup\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892085 kubelet[2611]: I0213 19:22:07.892046 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-hostproc\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892085 kubelet[2611]: I0213 19:22:07.892066 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9l8\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-kube-api-access-vj9l8\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892085 kubelet[2611]: I0213 19:22:07.892085 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afe3804c-277a-4f14-ab0c-6e903c6ef560-clustermesh-secrets\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892101 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-kernel\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892118 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-run\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892135 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-lib-modules\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892151 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/751b705c-17c6-4b2c-b380-e1dd1c538ebe-lib-modules\") pod \"kube-proxy-57jc2\" (UID: \"751b705c-17c6-4b2c-b380-e1dd1c538ebe\") " pod="kube-system/kube-proxy-57jc2" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892168 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2twh\" (UniqueName: \"kubernetes.io/projected/751b705c-17c6-4b2c-b380-e1dd1c538ebe-kube-api-access-x2twh\") pod \"kube-proxy-57jc2\" (UID: \"751b705c-17c6-4b2c-b380-e1dd1c538ebe\") " pod="kube-system/kube-proxy-57jc2" Feb 13 19:22:07.892278 kubelet[2611]: I0213 19:22:07.892184 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cni-path\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892199 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/751b705c-17c6-4b2c-b380-e1dd1c538ebe-kube-proxy\") pod \"kube-proxy-57jc2\" (UID: \"751b705c-17c6-4b2c-b380-e1dd1c538ebe\") " pod="kube-system/kube-proxy-57jc2" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892213 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/751b705c-17c6-4b2c-b380-e1dd1c538ebe-xtables-lock\") pod \"kube-proxy-57jc2\" (UID: \"751b705c-17c6-4b2c-b380-e1dd1c538ebe\") " pod="kube-system/kube-proxy-57jc2" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892228 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-bpf-maps\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892242 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-etc-cni-netd\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892257 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-hubble-tls\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892403 kubelet[2611]: I0213 19:22:07.892286 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-xtables-lock\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892523 kubelet[2611]: I0213 19:22:07.892306 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-config-path\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.892523 kubelet[2611]: I0213 19:22:07.892321 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-net\") pod \"cilium-xp86b\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") " pod="kube-system/cilium-xp86b" Feb 13 19:22:07.900258 systemd[1]: Created slice kubepods-burstable-podafe3804c_277a_4f14_ab0c_6e903c6ef560.slice - libcontainer container kubepods-burstable-podafe3804c_277a_4f14_ab0c_6e903c6ef560.slice. 
Feb 13 19:22:08.125616 kubelet[2611]: I0213 19:22:08.125473 2611 topology_manager.go:215] "Topology Admit Handler" podUID="69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" podNamespace="kube-system" podName="cilium-operator-599987898-zszwn" Feb 13 19:22:08.132707 systemd[1]: Created slice kubepods-besteffort-pod69f79891_4c91_49ca_bc3b_9cab4e2fd9ce.slice - libcontainer container kubepods-besteffort-pod69f79891_4c91_49ca_bc3b_9cab4e2fd9ce.slice. Feb 13 19:22:08.194523 kubelet[2611]: E0213 19:22:08.194472 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:08.194849 kubelet[2611]: I0213 19:22:08.194474 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkdb5\" (UniqueName: \"kubernetes.io/projected/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-kube-api-access-hkdb5\") pod \"cilium-operator-599987898-zszwn\" (UID: \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\") " pod="kube-system/cilium-operator-599987898-zszwn" Feb 13 19:22:08.194849 kubelet[2611]: I0213 19:22:08.194766 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-cilium-config-path\") pod \"cilium-operator-599987898-zszwn\" (UID: \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\") " pod="kube-system/cilium-operator-599987898-zszwn" Feb 13 19:22:08.201251 containerd[1445]: time="2025-02-13T19:22:08.201162344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57jc2,Uid:751b705c-17c6-4b2c-b380-e1dd1c538ebe,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:08.204098 kubelet[2611]: E0213 19:22:08.204071 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:08.204540 containerd[1445]: time="2025-02-13T19:22:08.204500557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xp86b,Uid:afe3804c-277a-4f14-ab0c-6e903c6ef560,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:08.233151 containerd[1445]: time="2025-02-13T19:22:08.233058065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:08.233369 containerd[1445]: time="2025-02-13T19:22:08.233124640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:08.233369 containerd[1445]: time="2025-02-13T19:22:08.233144244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.233440 containerd[1445]: time="2025-02-13T19:22:08.233237145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.234111 containerd[1445]: time="2025-02-13T19:22:08.234047723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:08.234170 containerd[1445]: time="2025-02-13T19:22:08.234103055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:08.234170 containerd[1445]: time="2025-02-13T19:22:08.234118538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.234239 containerd[1445]: time="2025-02-13T19:22:08.234191674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.251078 systemd[1]: Started cri-containerd-cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313.scope - libcontainer container cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313. Feb 13 19:22:08.252616 systemd[1]: Started cri-containerd-f498feb15ea97a5022bb86c14bad4edcb979124612063a6555a7b0196a3a0a41.scope - libcontainer container f498feb15ea97a5022bb86c14bad4edcb979124612063a6555a7b0196a3a0a41. Feb 13 19:22:08.277509 containerd[1445]: time="2025-02-13T19:22:08.277468855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xp86b,Uid:afe3804c-277a-4f14-ab0c-6e903c6ef560,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\"" Feb 13 19:22:08.279426 containerd[1445]: time="2025-02-13T19:22:08.278969224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57jc2,Uid:751b705c-17c6-4b2c-b380-e1dd1c538ebe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f498feb15ea97a5022bb86c14bad4edcb979124612063a6555a7b0196a3a0a41\"" Feb 13 19:22:08.280598 kubelet[2611]: E0213 19:22:08.280563 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:08.281226 kubelet[2611]: E0213 19:22:08.281000 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:08.285826 containerd[1445]: time="2025-02-13T19:22:08.285794042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:22:08.292319 containerd[1445]: time="2025-02-13T19:22:08.290493314Z" level=info msg="CreateContainer within sandbox \"f498feb15ea97a5022bb86c14bad4edcb979124612063a6555a7b0196a3a0a41\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:22:08.306324 containerd[1445]: time="2025-02-13T19:22:08.306243451Z" level=info msg="CreateContainer within sandbox \"f498feb15ea97a5022bb86c14bad4edcb979124612063a6555a7b0196a3a0a41\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"247b61749df1c35ce174077ee416999a8a03648a34c12fc0532a5710a1dfdcf8\"" Feb 13 19:22:08.310573 containerd[1445]: time="2025-02-13T19:22:08.310540314Z" level=info msg="StartContainer for \"247b61749df1c35ce174077ee416999a8a03648a34c12fc0532a5710a1dfdcf8\"" Feb 13 19:22:08.336117 systemd[1]: Started cri-containerd-247b61749df1c35ce174077ee416999a8a03648a34c12fc0532a5710a1dfdcf8.scope - libcontainer container 247b61749df1c35ce174077ee416999a8a03648a34c12fc0532a5710a1dfdcf8. 
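
The cilium image above is requested by tag and digest at once (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...); with a digest present the pull is content-addressed and the tag is informational only, which is why the completed pull later reports repo tag "". Splitting such a reference with plain string handling — a real parser (for example the distribution reference module) also handles registry ports and implicit defaults:

    // imageref.go — split a digest-pinned image reference into its parts.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

        rest, digest, hasDigest := strings.Cut(ref, "@")
        repo, tag, hasTag := strings.Cut(rest, ":")
        fmt.Println("repository:", repo) // quay.io/cilium/cilium
        if hasTag {
            fmt.Println("tag:", tag) // v1.12.5 (informational once pinned)
        }
        if hasDigest {
            fmt.Println("digest:", digest) // the content address actually pulled
        }
    }
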
Feb 13 19:22:08.362817 containerd[1445]: time="2025-02-13T19:22:08.362767299Z" level=info msg="StartContainer for \"247b61749df1c35ce174077ee416999a8a03648a34c12fc0532a5710a1dfdcf8\" returns successfully" Feb 13 19:22:08.436747 kubelet[2611]: E0213 19:22:08.436700 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:08.438274 containerd[1445]: time="2025-02-13T19:22:08.438223783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zszwn,Uid:69f79891-4c91-49ca-bc3b-9cab4e2fd9ce,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:08.468799 containerd[1445]: time="2025-02-13T19:22:08.468309628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:08.468799 containerd[1445]: time="2025-02-13T19:22:08.468748884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:08.468799 containerd[1445]: time="2025-02-13T19:22:08.468765128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.471093 containerd[1445]: time="2025-02-13T19:22:08.469135809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:08.506782 systemd[1]: Started cri-containerd-4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4.scope - libcontainer container 4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4. Feb 13 19:22:08.538101 containerd[1445]: time="2025-02-13T19:22:08.538056859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zszwn,Uid:69f79891-4c91-49ca-bc3b-9cab4e2fd9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\"" Feb 13 19:22:08.538880 kubelet[2611]: E0213 19:22:08.538851 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:09.042289 kubelet[2611]: E0213 19:22:09.042261 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:09.051237 kubelet[2611]: I0213 19:22:09.051186 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57jc2" podStartSLOduration=2.051172779 podStartE2EDuration="2.051172779s" podCreationTimestamp="2025-02-13 19:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:22:09.050925047 +0000 UTC m=+17.137568212" watchObservedRunningTime="2025-02-13 19:22:09.051172779 +0000 UTC m=+17.137815944" Feb 13 19:22:13.946856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756563417.mount: Deactivated successfully. 
Feb 13 19:22:15.151400 containerd[1445]: time="2025-02-13T19:22:15.151349146Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:15.151926 containerd[1445]: time="2025-02-13T19:22:15.151795977Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:22:15.152770 containerd[1445]: time="2025-02-13T19:22:15.152746249Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:15.155281 containerd[1445]: time="2025-02-13T19:22:15.155254849Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.869418758s" Feb 13 19:22:15.155359 containerd[1445]: time="2025-02-13T19:22:15.155285494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:22:15.157775 containerd[1445]: time="2025-02-13T19:22:15.157749287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:22:15.159261 containerd[1445]: time="2025-02-13T19:22:15.159230563Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:22:15.214309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759230129.mount: Deactivated successfully. Feb 13 19:22:15.216892 containerd[1445]: time="2025-02-13T19:22:15.216774421Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\"" Feb 13 19:22:15.217524 containerd[1445]: time="2025-02-13T19:22:15.217497376Z" level=info msg="StartContainer for \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\"" Feb 13 19:22:15.243151 systemd[1]: Started cri-containerd-d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6.scope - libcontainer container d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6. Feb 13 19:22:15.263877 containerd[1445]: time="2025-02-13T19:22:15.263839448Z" level=info msg="StartContainer for \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\" returns successfully" Feb 13 19:22:15.309068 systemd[1]: cri-containerd-d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6.scope: Deactivated successfully. 
Feb 13 19:22:15.451330 containerd[1445]: time="2025-02-13T19:22:15.445551949Z" level=info msg="shim disconnected" id=d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6 namespace=k8s.io Feb 13 19:22:15.451330 containerd[1445]: time="2025-02-13T19:22:15.451325030Z" level=warning msg="cleaning up after shim disconnected" id=d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6 namespace=k8s.io Feb 13 19:22:15.451539 containerd[1445]: time="2025-02-13T19:22:15.451342473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:16.054510 kubelet[2611]: E0213 19:22:16.054451 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:16.057022 containerd[1445]: time="2025-02-13T19:22:16.056982700Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:22:16.070561 containerd[1445]: time="2025-02-13T19:22:16.070501447Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\"" Feb 13 19:22:16.071121 containerd[1445]: time="2025-02-13T19:22:16.070965518Z" level=info msg="StartContainer for \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\"" Feb 13 19:22:16.097118 systemd[1]: Started cri-containerd-e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4.scope - libcontainer container e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4. Feb 13 19:22:16.120488 containerd[1445]: time="2025-02-13T19:22:16.120443444Z" level=info msg="StartContainer for \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\" returns successfully" Feb 13 19:22:16.134486 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:22:16.134715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:16.134794 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:22:16.143003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:22:16.143245 systemd[1]: cri-containerd-e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4.scope: Deactivated successfully. Feb 13 19:22:16.183558 containerd[1445]: time="2025-02-13T19:22:16.183467482Z" level=info msg="shim disconnected" id=e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4 namespace=k8s.io Feb 13 19:22:16.183558 containerd[1445]: time="2025-02-13T19:22:16.183520890Z" level=warning msg="cleaning up after shim disconnected" id=e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4 namespace=k8s.io Feb 13 19:22:16.183558 containerd[1445]: time="2025-02-13T19:22:16.183530892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:16.185615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:16.204683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6-rootfs.mount: Deactivated successfully. 
Feb 13 19:22:17.058035 kubelet[2611]: E0213 19:22:17.057533 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:17.060364 containerd[1445]: time="2025-02-13T19:22:17.060309490Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:22:17.078470 containerd[1445]: time="2025-02-13T19:22:17.078380503Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\"" Feb 13 19:22:17.079444 containerd[1445]: time="2025-02-13T19:22:17.079407053Z" level=info msg="StartContainer for \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\"" Feb 13 19:22:17.113121 systemd[1]: Started cri-containerd-346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33.scope - libcontainer container 346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33. Feb 13 19:22:17.140032 containerd[1445]: time="2025-02-13T19:22:17.139970702Z" level=info msg="StartContainer for \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\" returns successfully" Feb 13 19:22:17.166777 systemd[1]: cri-containerd-346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33.scope: Deactivated successfully. Feb 13 19:22:17.188540 containerd[1445]: time="2025-02-13T19:22:17.188484382Z" level=info msg="shim disconnected" id=346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33 namespace=k8s.io Feb 13 19:22:17.189141 containerd[1445]: time="2025-02-13T19:22:17.188957371Z" level=warning msg="cleaning up after shim disconnected" id=346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33 namespace=k8s.io Feb 13 19:22:17.189141 containerd[1445]: time="2025-02-13T19:22:17.188979695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:17.204228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33-rootfs.mount: Deactivated successfully. 
Feb 13 19:22:18.061447 kubelet[2611]: E0213 19:22:18.061416 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:18.064517 containerd[1445]: time="2025-02-13T19:22:18.064475819Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:22:18.125422 containerd[1445]: time="2025-02-13T19:22:18.125288032Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\"" Feb 13 19:22:18.125845 containerd[1445]: time="2025-02-13T19:22:18.125822988Z" level=info msg="StartContainer for \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\"" Feb 13 19:22:18.154085 systemd[1]: Started cri-containerd-dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff.scope - libcontainer container dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff. Feb 13 19:22:18.174742 systemd[1]: cri-containerd-dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff.scope: Deactivated successfully. Feb 13 19:22:18.175882 containerd[1445]: time="2025-02-13T19:22:18.175830438Z" level=info msg="StartContainer for \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\" returns successfully" Feb 13 19:22:18.198561 containerd[1445]: time="2025-02-13T19:22:18.198494794Z" level=info msg="shim disconnected" id=dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff namespace=k8s.io Feb 13 19:22:18.198561 containerd[1445]: time="2025-02-13T19:22:18.198547361Z" level=warning msg="cleaning up after shim disconnected" id=dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff namespace=k8s.io Feb 13 19:22:18.198561 containerd[1445]: time="2025-02-13T19:22:18.198555562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:18.204980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff-rootfs.mount: Deactivated successfully. Feb 13 19:22:18.339316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243538464.mount: Deactivated successfully. 
Feb 13 19:22:18.564827 containerd[1445]: time="2025-02-13T19:22:18.564776916Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:18.565201 containerd[1445]: time="2025-02-13T19:22:18.565155889Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:22:18.566002 containerd[1445]: time="2025-02-13T19:22:18.565971524Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:18.567542 containerd[1445]: time="2025-02-13T19:22:18.567422889Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.409639357s" Feb 13 19:22:18.567542 containerd[1445]: time="2025-02-13T19:22:18.567457054Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:22:18.569690 containerd[1445]: time="2025-02-13T19:22:18.569655164Z" level=info msg="CreateContainer within sandbox \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:22:18.583760 containerd[1445]: time="2025-02-13T19:22:18.583718426Z" level=info msg="CreateContainer within sandbox \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\"" Feb 13 19:22:18.584286 containerd[1445]: time="2025-02-13T19:22:18.584264903Z" level=info msg="StartContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\"" Feb 13 19:22:18.606084 systemd[1]: Started cri-containerd-34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5.scope - libcontainer container 34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5. 
Feb 13 19:22:18.627211 containerd[1445]: time="2025-02-13T19:22:18.627067818Z" level=info msg="StartContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" returns successfully" Feb 13 19:22:19.069983 kubelet[2611]: E0213 19:22:19.069947 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:19.074701 kubelet[2611]: E0213 19:22:19.074656 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:19.078628 containerd[1445]: time="2025-02-13T19:22:19.078586617Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:22:19.109369 kubelet[2611]: I0213 19:22:19.109295 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zszwn" podStartSLOduration=1.080568791 podStartE2EDuration="11.109279298s" podCreationTimestamp="2025-02-13 19:22:08 +0000 UTC" firstStartedPulling="2025-02-13 19:22:08.53947641 +0000 UTC m=+16.626119575" lastFinishedPulling="2025-02-13 19:22:18.568186917 +0000 UTC m=+26.654830082" observedRunningTime="2025-02-13 19:22:19.087795706 +0000 UTC m=+27.174438871" watchObservedRunningTime="2025-02-13 19:22:19.109279298 +0000 UTC m=+27.195922463" Feb 13 19:22:19.121550 containerd[1445]: time="2025-02-13T19:22:19.121495434Z" level=info msg="CreateContainer within sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\"" Feb 13 19:22:19.123143 containerd[1445]: time="2025-02-13T19:22:19.121801676Z" level=info msg="StartContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\"" Feb 13 19:22:19.148083 systemd[1]: Started cri-containerd-fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b.scope - libcontainer container fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b. Feb 13 19:22:19.185166 containerd[1445]: time="2025-02-13T19:22:19.184898110Z" level=info msg="StartContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" returns successfully" Feb 13 19:22:19.312421 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:41358.service - OpenSSH per-connection server daemon (10.0.0.1:41358). Feb 13 19:22:19.328622 kubelet[2611]: I0213 19:22:19.328526 2611 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:22:19.357897 kubelet[2611]: I0213 19:22:19.356979 2611 topology_manager.go:215] "Topology Admit Handler" podUID="514bcc56-b433-4f01-a2a7-e99a4a573d7b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5jlqk" Feb 13 19:22:19.357897 kubelet[2611]: I0213 19:22:19.357570 2611 topology_manager.go:215] "Topology Admit Handler" podUID="060246b6-7138-46ee-8312-3f326fad9b31" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b64lk" Feb 13 19:22:19.365827 systemd[1]: Created slice kubepods-burstable-pod514bcc56_b433_4f01_a2a7_e99a4a573d7b.slice - libcontainer container kubepods-burstable-pod514bcc56_b433_4f01_a2a7_e99a4a573d7b.slice. 
Feb 13 19:22:19.372588 systemd[1]: Created slice kubepods-burstable-pod060246b6_7138_46ee_8312_3f326fad9b31.slice - libcontainer container kubepods-burstable-pod060246b6_7138_46ee_8312_3f326fad9b31.slice. Feb 13 19:22:19.397324 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 41358 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:19.401256 sshd-session[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:19.408982 systemd-logind[1426]: New session 8 of user core. Feb 13 19:22:19.414383 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:22:19.485339 kubelet[2611]: I0213 19:22:19.484003 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060246b6-7138-46ee-8312-3f326fad9b31-config-volume\") pod \"coredns-7db6d8ff4d-b64lk\" (UID: \"060246b6-7138-46ee-8312-3f326fad9b31\") " pod="kube-system/coredns-7db6d8ff4d-b64lk" Feb 13 19:22:19.485339 kubelet[2611]: I0213 19:22:19.484049 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrk5\" (UniqueName: \"kubernetes.io/projected/060246b6-7138-46ee-8312-3f326fad9b31-kube-api-access-vtrk5\") pod \"coredns-7db6d8ff4d-b64lk\" (UID: \"060246b6-7138-46ee-8312-3f326fad9b31\") " pod="kube-system/coredns-7db6d8ff4d-b64lk" Feb 13 19:22:19.485339 kubelet[2611]: I0213 19:22:19.484071 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48s8h\" (UniqueName: \"kubernetes.io/projected/514bcc56-b433-4f01-a2a7-e99a4a573d7b-kube-api-access-48s8h\") pod \"coredns-7db6d8ff4d-5jlqk\" (UID: \"514bcc56-b433-4f01-a2a7-e99a4a573d7b\") " pod="kube-system/coredns-7db6d8ff4d-5jlqk" Feb 13 19:22:19.485339 kubelet[2611]: I0213 19:22:19.484089 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/514bcc56-b433-4f01-a2a7-e99a4a573d7b-config-volume\") pod \"coredns-7db6d8ff4d-5jlqk\" (UID: \"514bcc56-b433-4f01-a2a7-e99a4a573d7b\") " pod="kube-system/coredns-7db6d8ff4d-5jlqk" Feb 13 19:22:19.608141 sshd[3384]: Connection closed by 10.0.0.1 port 41358 Feb 13 19:22:19.608496 sshd-session[3377]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:19.611838 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:41358.service: Deactivated successfully. Feb 13 19:22:19.615720 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:22:19.617764 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:22:19.619000 systemd-logind[1426]: Removed session 8. 
Feb 13 19:22:19.671432 kubelet[2611]: E0213 19:22:19.671395 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:19.672433 containerd[1445]: time="2025-02-13T19:22:19.672393683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jlqk,Uid:514bcc56-b433-4f01-a2a7-e99a4a573d7b,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:19.675155 kubelet[2611]: E0213 19:22:19.675125 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:19.675631 containerd[1445]: time="2025-02-13T19:22:19.675595437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b64lk,Uid:060246b6-7138-46ee-8312-3f326fad9b31,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:20.078666 kubelet[2611]: E0213 19:22:20.078298 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:20.079786 kubelet[2611]: E0213 19:22:20.078642 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:20.093686 kubelet[2611]: I0213 19:22:20.093618 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xp86b" podStartSLOduration=6.221293387 podStartE2EDuration="13.093599798s" podCreationTimestamp="2025-02-13 19:22:07 +0000 UTC" firstStartedPulling="2025-02-13 19:22:08.285298173 +0000 UTC m=+16.371941338" lastFinishedPulling="2025-02-13 19:22:15.157604584 +0000 UTC m=+23.244247749" observedRunningTime="2025-02-13 19:22:20.09338945 +0000 UTC m=+28.180032615" watchObservedRunningTime="2025-02-13 19:22:20.093599798 +0000 UTC m=+28.180242963" Feb 13 19:22:21.085211 kubelet[2611]: E0213 19:22:21.085176 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:22.087153 kubelet[2611]: E0213 19:22:22.087119 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:22.307846 systemd-networkd[1373]: cilium_host: Link UP Feb 13 19:22:22.308035 systemd-networkd[1373]: cilium_net: Link UP Feb 13 19:22:22.308038 systemd-networkd[1373]: cilium_net: Gained carrier Feb 13 19:22:22.308202 systemd-networkd[1373]: cilium_host: Gained carrier Feb 13 19:22:22.317446 systemd-networkd[1373]: cilium_host: Gained IPv6LL Feb 13 19:22:22.397679 systemd-networkd[1373]: cilium_vxlan: Link UP Feb 13 19:22:22.397685 systemd-networkd[1373]: cilium_vxlan: Gained carrier Feb 13 19:22:22.697951 kernel: NET: Registered PF_ALG protocol family Feb 13 19:22:23.015150 systemd-networkd[1373]: cilium_net: Gained IPv6LL Feb 13 19:22:23.246130 systemd-networkd[1373]: lxc_health: Link UP Feb 13 19:22:23.255403 systemd-networkd[1373]: lxc_health: Gained carrier Feb 13 19:22:23.810708 systemd-networkd[1373]: lxc502ca9579b6b: Link UP Feb 13 19:22:23.821394 systemd-networkd[1373]: lxc7a6b59dbbca1: Link UP Feb 13 19:22:23.831805 kernel: eth0: renamed from tmpb74c8 Feb 13 19:22:23.839946 kernel: eth0: renamed from tmpe0fc1
Feb 13 19:22:23.844246 systemd-networkd[1373]: lxc502ca9579b6b: Gained carrier Feb 13 19:22:23.844736 systemd-networkd[1373]: lxc7a6b59dbbca1: Gained carrier Feb 13 19:22:24.163128 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Feb 13 19:22:24.211257 kubelet[2611]: E0213 19:22:24.211210 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:24.624764 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:53998.service - OpenSSH per-connection server daemon (10.0.0.1:53998). Feb 13 19:22:24.670487 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 53998 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:24.671982 sshd-session[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:24.677594 systemd-logind[1426]: New session 9 of user core. Feb 13 19:22:24.689366 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:22:24.813975 sshd[3854]: Connection closed by 10.0.0.1 port 53998 Feb 13 19:22:24.814348 sshd-session[3852]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:24.817746 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:53998.service: Deactivated successfully. Feb 13 19:22:24.819494 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:22:24.821207 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:22:24.822419 systemd-logind[1426]: Removed session 9. Feb 13 19:22:24.995069 systemd-networkd[1373]: lxc502ca9579b6b: Gained IPv6LL Feb 13 19:22:25.315069 systemd-networkd[1373]: lxc_health: Gained IPv6LL Feb 13 19:22:25.892109 systemd-networkd[1373]: lxc7a6b59dbbca1: Gained IPv6LL Feb 13 19:22:27.518933 containerd[1445]: time="2025-02-13T19:22:27.518644381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:27.518933 containerd[1445]: time="2025-02-13T19:22:27.518692946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:27.518933 containerd[1445]: time="2025-02-13T19:22:27.518703907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:27.518933 containerd[1445]: time="2025-02-13T19:22:27.518822319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:27.519799 containerd[1445]: time="2025-02-13T19:22:27.519633523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:27.519799 containerd[1445]: time="2025-02-13T19:22:27.519717011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:27.519799 containerd[1445]: time="2025-02-13T19:22:27.519728332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:27.519954 containerd[1445]: time="2025-02-13T19:22:27.519821022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:22:27.541119 systemd[1]: Started cri-containerd-e0fc1c205e8911c7231622c9f74c0f7fffbe943443f901e354925eeb3eb3d354.scope - libcontainer container e0fc1c205e8911c7231622c9f74c0f7fffbe943443f901e354925eeb3eb3d354. Feb 13 19:22:27.545114 systemd[1]: Started cri-containerd-b74c8001885837418d6787cf76721e7ce10bc14d3127dc64bb6a031764effe5f.scope - libcontainer container b74c8001885837418d6787cf76721e7ce10bc14d3127dc64bb6a031764effe5f. Feb 13 19:22:27.554019 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:22:27.558470 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:22:27.575266 containerd[1445]: time="2025-02-13T19:22:27.574016555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b64lk,Uid:060246b6-7138-46ee-8312-3f326fad9b31,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0fc1c205e8911c7231622c9f74c0f7fffbe943443f901e354925eeb3eb3d354\"" Feb 13 19:22:27.576003 kubelet[2611]: E0213 19:22:27.575858 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:27.584497 containerd[1445]: time="2025-02-13T19:22:27.580601352Z" level=info msg="CreateContainer within sandbox \"e0fc1c205e8911c7231622c9f74c0f7fffbe943443f901e354925eeb3eb3d354\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:22:27.586072 containerd[1445]: time="2025-02-13T19:22:27.586020389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jlqk,Uid:514bcc56-b433-4f01-a2a7-e99a4a573d7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b74c8001885837418d6787cf76721e7ce10bc14d3127dc64bb6a031764effe5f\"" Feb 13 19:22:27.586837 kubelet[2611]: E0213 19:22:27.586746 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:27.593089 containerd[1445]: time="2025-02-13T19:22:27.589520189Z" level=info msg="CreateContainer within sandbox \"b74c8001885837418d6787cf76721e7ce10bc14d3127dc64bb6a031764effe5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:22:27.594331 containerd[1445]: time="2025-02-13T19:22:27.594279678Z" level=info msg="CreateContainer within sandbox \"e0fc1c205e8911c7231622c9f74c0f7fffbe943443f901e354925eeb3eb3d354\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ada8921a06ee25106dc3741725e94a6f84cea4682f41fff716d2efd227a500b\"" Feb 13 19:22:27.596414 containerd[1445]: time="2025-02-13T19:22:27.595111964Z" level=info msg="StartContainer for \"2ada8921a06ee25106dc3741725e94a6f84cea4682f41fff716d2efd227a500b\"" Feb 13 19:22:27.603889 containerd[1445]: time="2025-02-13T19:22:27.603850902Z" level=info msg="CreateContainer within sandbox \"b74c8001885837418d6787cf76721e7ce10bc14d3127dc64bb6a031764effe5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f71621578c341677548c4461d636a3e76d35a61a133e8807db6dd183c4618aa\"" Feb 13 19:22:27.604532 containerd[1445]: time="2025-02-13T19:22:27.604391638Z" level=info msg="StartContainer for \"3f71621578c341677548c4461d636a3e76d35a61a133e8807db6dd183c4618aa\""
Feb 13 19:22:27.624058 systemd[1]: Started cri-containerd-2ada8921a06ee25106dc3741725e94a6f84cea4682f41fff716d2efd227a500b.scope - libcontainer container 2ada8921a06ee25106dc3741725e94a6f84cea4682f41fff716d2efd227a500b. Feb 13 19:22:27.626516 systemd[1]: Started cri-containerd-3f71621578c341677548c4461d636a3e76d35a61a133e8807db6dd183c4618aa.scope - libcontainer container 3f71621578c341677548c4461d636a3e76d35a61a133e8807db6dd183c4618aa. Feb 13 19:22:27.651437 containerd[1445]: time="2025-02-13T19:22:27.651210652Z" level=info msg="StartContainer for \"2ada8921a06ee25106dc3741725e94a6f84cea4682f41fff716d2efd227a500b\" returns successfully" Feb 13 19:22:27.654598 containerd[1445]: time="2025-02-13T19:22:27.654500391Z" level=info msg="StartContainer for \"3f71621578c341677548c4461d636a3e76d35a61a133e8807db6dd183c4618aa\" returns successfully" Feb 13 19:22:28.102586 kubelet[2611]: E0213 19:22:28.102550 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:28.105305 kubelet[2611]: E0213 19:22:28.105277 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:28.113114 kubelet[2611]: I0213 19:22:28.113052 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5jlqk" podStartSLOduration=20.113037084 podStartE2EDuration="20.113037084s" podCreationTimestamp="2025-02-13 19:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:22:28.112714172 +0000 UTC m=+36.199357337" watchObservedRunningTime="2025-02-13 19:22:28.113037084 +0000 UTC m=+36.199680249" Feb 13 19:22:28.523948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739610205.mount: Deactivated successfully. Feb 13 19:22:29.106978 kubelet[2611]: E0213 19:22:29.106948 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:29.108026 kubelet[2611]: E0213 19:22:29.107008 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:29.825520 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:54006.service - OpenSSH per-connection server daemon (10.0.0.1:54006). Feb 13 19:22:29.876530 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 54006 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:29.878859 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:29.884364 systemd-logind[1426]: New session 10 of user core. Feb 13 19:22:29.893139 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:22:30.007946 sshd[4045]: Connection closed by 10.0.0.1 port 54006 Feb 13 19:22:30.008475 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:30.011719 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:22:30.012049 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:54006.service: Deactivated successfully. Feb 13 19:22:30.013609 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:22:30.014515 systemd-logind[1426]: Removed session 10.
Feb 13 19:22:30.109096 kubelet[2611]: E0213 19:22:30.108986 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:30.368123 kubelet[2611]: I0213 19:22:30.367856 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:22:30.368691 kubelet[2611]: E0213 19:22:30.368645 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:30.382851 kubelet[2611]: I0213 19:22:30.382667 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b64lk" podStartSLOduration=22.382652326 podStartE2EDuration="22.382652326s" podCreationTimestamp="2025-02-13 19:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:22:28.135771153 +0000 UTC m=+36.222414318" watchObservedRunningTime="2025-02-13 19:22:30.382652326 +0000 UTC m=+38.469295491" Feb 13 19:22:31.112033 kubelet[2611]: E0213 19:22:31.111595 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:35.023548 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:46414.service - OpenSSH per-connection server daemon (10.0.0.1:46414). Feb 13 19:22:35.065387 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 46414 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:35.066862 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:35.070536 systemd-logind[1426]: New session 11 of user core. Feb 13 19:22:35.090159 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:22:35.214111 sshd[4061]: Connection closed by 10.0.0.1 port 46414 Feb 13 19:22:35.214558 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:35.222484 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:46414.service: Deactivated successfully. Feb 13 19:22:35.225116 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:22:35.226436 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:22:35.235218 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:46426.service - OpenSSH per-connection server daemon (10.0.0.1:46426). Feb 13 19:22:35.236724 systemd-logind[1426]: Removed session 11. Feb 13 19:22:35.273856 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 46426 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:35.275427 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:35.279502 systemd-logind[1426]: New session 12 of user core. Feb 13 19:22:35.297094 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:22:35.446665 sshd[4076]: Connection closed by 10.0.0.1 port 46426 Feb 13 19:22:35.448404 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:35.456157 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:46426.service: Deactivated successfully. Feb 13 19:22:35.458196 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:22:35.462093 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 19:22:35.477288 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:46434.service - OpenSSH per-connection server daemon (10.0.0.1:46434). Feb 13 19:22:35.477772 systemd-logind[1426]: Removed session 12. Feb 13 19:22:35.518793 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 46434 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:35.520311 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:35.523993 systemd-logind[1426]: New session 13 of user core. Feb 13 19:22:35.532142 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:22:35.645515 sshd[4088]: Connection closed by 10.0.0.1 port 46434 Feb 13 19:22:35.645897 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:35.648444 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:46434.service: Deactivated successfully. Feb 13 19:22:35.650148 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:22:35.651449 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:22:35.652606 systemd-logind[1426]: Removed session 13. Feb 13 19:22:40.657828 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:46438.service - OpenSSH per-connection server daemon (10.0.0.1:46438). Feb 13 19:22:40.700973 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 46438 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:40.702260 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:40.706775 systemd-logind[1426]: New session 14 of user core. Feb 13 19:22:40.715083 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:22:40.842666 sshd[4108]: Connection closed by 10.0.0.1 port 46438 Feb 13 19:22:40.843028 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:40.845838 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:46438.service: Deactivated successfully. Feb 13 19:22:40.847585 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:22:40.848892 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:22:40.850533 systemd-logind[1426]: Removed session 14. Feb 13 19:22:45.870168 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:43932.service - OpenSSH per-connection server daemon (10.0.0.1:43932). Feb 13 19:22:45.913840 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 43932 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:45.915110 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:45.918705 systemd-logind[1426]: New session 15 of user core. Feb 13 19:22:45.928157 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:22:46.045792 sshd[4122]: Connection closed by 10.0.0.1 port 43932 Feb 13 19:22:46.046394 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:46.056578 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:43932.service: Deactivated successfully. Feb 13 19:22:46.059151 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:22:46.060684 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:22:46.074389 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:43938.service - OpenSSH per-connection server daemon (10.0.0.1:43938). Feb 13 19:22:46.075931 systemd-logind[1426]: Removed session 15. 
Feb 13 19:22:46.113946 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 43938 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:46.114903 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:46.119075 systemd-logind[1426]: New session 16 of user core. Feb 13 19:22:46.133142 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:22:46.343926 sshd[4136]: Connection closed by 10.0.0.1 port 43938 Feb 13 19:22:46.344619 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:46.357472 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:43938.service: Deactivated successfully. Feb 13 19:22:46.360141 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:22:46.361734 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:22:46.374235 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:43944.service - OpenSSH per-connection server daemon (10.0.0.1:43944). Feb 13 19:22:46.378304 systemd-logind[1426]: Removed session 16. Feb 13 19:22:46.413551 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:46.416348 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:46.422452 systemd-logind[1426]: New session 17 of user core. Feb 13 19:22:46.433068 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:22:47.700726 sshd[4148]: Connection closed by 10.0.0.1 port 43944 Feb 13 19:22:47.701479 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:47.713272 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:43944.service: Deactivated successfully. Feb 13 19:22:47.715613 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:22:47.718859 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:22:47.725366 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:43956.service - OpenSSH per-connection server daemon (10.0.0.1:43956). Feb 13 19:22:47.726541 systemd-logind[1426]: Removed session 17. Feb 13 19:22:47.773977 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 43956 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:47.775069 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:47.778656 systemd-logind[1426]: New session 18 of user core. Feb 13 19:22:47.790133 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:22:48.031330 sshd[4172]: Connection closed by 10.0.0.1 port 43956 Feb 13 19:22:48.033120 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:48.041468 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:43956.service: Deactivated successfully. Feb 13 19:22:48.043263 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:22:48.045195 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:22:48.054169 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:43968.service - OpenSSH per-connection server daemon (10.0.0.1:43968). Feb 13 19:22:48.057192 systemd-logind[1426]: Removed session 18. 
Feb 13 19:22:48.092805 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 43968 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:48.093961 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:48.097718 systemd-logind[1426]: New session 19 of user core. Feb 13 19:22:48.109058 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:22:48.219929 sshd[4185]: Connection closed by 10.0.0.1 port 43968 Feb 13 19:22:48.220639 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:48.224001 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:43968.service: Deactivated successfully. Feb 13 19:22:48.225608 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:22:48.226200 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:22:48.227171 systemd-logind[1426]: Removed session 19. Feb 13 19:22:53.231951 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380). Feb 13 19:22:53.275980 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:53.277438 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:53.281601 systemd-logind[1426]: New session 20 of user core. Feb 13 19:22:53.293090 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:22:53.406646 sshd[4204]: Connection closed by 10.0.0.1 port 56380 Feb 13 19:22:53.407122 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:53.410262 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:56380.service: Deactivated successfully. Feb 13 19:22:53.413460 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:22:53.414132 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:22:53.415171 systemd-logind[1426]: Removed session 20. Feb 13 19:22:58.422393 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:56390.service - OpenSSH per-connection server daemon (10.0.0.1:56390). Feb 13 19:22:58.461784 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 56390 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:58.462873 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:58.466540 systemd-logind[1426]: New session 21 of user core. Feb 13 19:22:58.476046 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:22:58.583977 sshd[4219]: Connection closed by 10.0.0.1 port 56390 Feb 13 19:22:58.584310 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:58.587718 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:56390.service: Deactivated successfully. Feb 13 19:22:58.589276 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:22:58.589878 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:22:58.590880 systemd-logind[1426]: Removed session 21. Feb 13 19:23:03.597451 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:60534.service - OpenSSH per-connection server daemon (10.0.0.1:60534). 
Feb 13 19:23:03.645806 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 60534 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:03.647241 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:03.651081 systemd-logind[1426]: New session 22 of user core. Feb 13 19:23:03.657052 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:23:03.767214 sshd[4233]: Connection closed by 10.0.0.1 port 60534 Feb 13 19:23:03.767830 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:03.779504 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:60534.service: Deactivated successfully. Feb 13 19:23:03.782181 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:23:03.783557 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:23:03.789208 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:60540.service - OpenSSH per-connection server daemon (10.0.0.1:60540). Feb 13 19:23:03.790474 systemd-logind[1426]: Removed session 22. Feb 13 19:23:03.825321 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 60540 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:03.826539 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:03.830595 systemd-logind[1426]: New session 23 of user core. Feb 13 19:23:03.838066 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:23:04.995068 kubelet[2611]: E0213 19:23:04.995028 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:05.548633 containerd[1445]: time="2025-02-13T19:23:05.548570617Z" level=info msg="StopContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" with timeout 30 (s)" Feb 13 19:23:05.549843 containerd[1445]: time="2025-02-13T19:23:05.549797217Z" level=info msg="Stop container \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" with signal terminated" Feb 13 19:23:05.562492 systemd[1]: cri-containerd-34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5.scope: Deactivated successfully. Feb 13 19:23:05.587585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5-rootfs.mount: Deactivated successfully. 
Feb 13 19:23:05.610514 containerd[1445]: time="2025-02-13T19:23:05.610068235Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:23:05.611249 containerd[1445]: time="2025-02-13T19:23:05.610723333Z" level=info msg="shim disconnected" id=34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5 namespace=k8s.io Feb 13 19:23:05.611524 containerd[1445]: time="2025-02-13T19:23:05.611323073Z" level=warning msg="cleaning up after shim disconnected" id=34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5 namespace=k8s.io Feb 13 19:23:05.611524 containerd[1445]: time="2025-02-13T19:23:05.611348873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:23:05.614349 containerd[1445]: time="2025-02-13T19:23:05.614312575Z" level=info msg="StopContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" with timeout 2 (s)" Feb 13 19:23:05.614599 containerd[1445]: time="2025-02-13T19:23:05.614571687Z" level=info msg="Stop container \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" with signal terminated" Feb 13 19:23:05.621934 systemd-networkd[1373]: lxc_health: Link DOWN Feb 13 19:23:05.621941 systemd-networkd[1373]: lxc_health: Lost carrier Feb 13 19:23:05.647630 systemd[1]: cri-containerd-fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b.scope: Deactivated successfully. Feb 13 19:23:05.648018 systemd[1]: cri-containerd-fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b.scope: Consumed 6.542s CPU time. Feb 13 19:23:05.663729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b-rootfs.mount: Deactivated successfully. Feb 13 19:23:05.672005 containerd[1445]: time="2025-02-13T19:23:05.671921560Z" level=info msg="StopContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" returns successfully" Feb 13 19:23:05.672522 containerd[1445]: time="2025-02-13T19:23:05.672466742Z" level=info msg="shim disconnected" id=fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b namespace=k8s.io Feb 13 19:23:05.672522 containerd[1445]: time="2025-02-13T19:23:05.672518061Z" level=warning msg="cleaning up after shim disconnected" id=fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b namespace=k8s.io Feb 13 19:23:05.672637 containerd[1445]: time="2025-02-13T19:23:05.672528380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:23:05.674544 containerd[1445]: time="2025-02-13T19:23:05.674497196Z" level=info msg="StopPodSandbox for \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\"" Feb 13 19:23:05.678475 containerd[1445]: time="2025-02-13T19:23:05.678417827Z" level=info msg="Container to stop \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.680001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4-shm.mount: Deactivated successfully. Feb 13 19:23:05.686541 systemd[1]: cri-containerd-4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4.scope: Deactivated successfully. 
Feb 13 19:23:05.691322 containerd[1445]: time="2025-02-13T19:23:05.691084730Z" level=info msg="StopContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" returns successfully" Feb 13 19:23:05.691777 containerd[1445]: time="2025-02-13T19:23:05.691746828Z" level=info msg="StopPodSandbox for \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\"" Feb 13 19:23:05.691841 containerd[1445]: time="2025-02-13T19:23:05.691803066Z" level=info msg="Container to stop \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.691841 containerd[1445]: time="2025-02-13T19:23:05.691817306Z" level=info msg="Container to stop \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.691841 containerd[1445]: time="2025-02-13T19:23:05.691826506Z" level=info msg="Container to stop \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.691841 containerd[1445]: time="2025-02-13T19:23:05.691835505Z" level=info msg="Container to stop \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.691953 containerd[1445]: time="2025-02-13T19:23:05.691843545Z" level=info msg="Container to stop \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:23:05.693406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313-shm.mount: Deactivated successfully. Feb 13 19:23:05.705461 systemd[1]: cri-containerd-cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313.scope: Deactivated successfully. 
Feb 13 19:23:05.711888 containerd[1445]: time="2025-02-13T19:23:05.711496699Z" level=info msg="shim disconnected" id=4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4 namespace=k8s.io
Feb 13 19:23:05.711888 containerd[1445]: time="2025-02-13T19:23:05.711547857Z" level=warning msg="cleaning up after shim disconnected" id=4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4 namespace=k8s.io
Feb 13 19:23:05.711888 containerd[1445]: time="2025-02-13T19:23:05.711556897Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:05.725232 containerd[1445]: time="2025-02-13T19:23:05.725195248Z" level=info msg="TearDown network for sandbox \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\" successfully"
Feb 13 19:23:05.725232 containerd[1445]: time="2025-02-13T19:23:05.725229327Z" level=info msg="StopPodSandbox for \"4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4\" returns successfully"
Feb 13 19:23:05.752415 containerd[1445]: time="2025-02-13T19:23:05.752362435Z" level=info msg="shim disconnected" id=cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313 namespace=k8s.io
Feb 13 19:23:05.752415 containerd[1445]: time="2025-02-13T19:23:05.752409433Z" level=warning msg="cleaning up after shim disconnected" id=cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313 namespace=k8s.io
Feb 13 19:23:05.752415 containerd[1445]: time="2025-02-13T19:23:05.752419433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:05.753951 kubelet[2611]: I0213 19:23:05.753802 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkdb5\" (UniqueName: \"kubernetes.io/projected/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-kube-api-access-hkdb5\") pod \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\" (UID: \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\") "
Feb 13 19:23:05.753951 kubelet[2611]: I0213 19:23:05.753846 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-cilium-config-path\") pod \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\" (UID: \"69f79891-4c91-49ca-bc3b-9cab4e2fd9ce\") "
Feb 13 19:23:05.764562 kubelet[2611]: I0213 19:23:05.764513 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" (UID: "69f79891-4c91-49ca-bc3b-9cab4e2fd9ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:23:05.766022 containerd[1445]: time="2025-02-13T19:23:05.765987987Z" level=info msg="TearDown network for sandbox \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" successfully"
Feb 13 19:23:05.766022 containerd[1445]: time="2025-02-13T19:23:05.766021346Z" level=info msg="StopPodSandbox for \"cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313\" returns successfully"
Feb 13 19:23:05.766158 kubelet[2611]: I0213 19:23:05.766112 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-kube-api-access-hkdb5" (OuterVolumeSpecName: "kube-api-access-hkdb5") pod "69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" (UID: "69f79891-4c91-49ca-bc3b-9cab4e2fd9ce"). InnerVolumeSpecName "kube-api-access-hkdb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.854984 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-cgroup\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.855052 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-hostproc\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.855069 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-bpf-maps\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.855085 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-xtables-lock\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.855099 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-kernel\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855134 kubelet[2611]: I0213 19:23:05.855099 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.855899 kubelet[2611]: I0213 19:23:05.855114 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cni-path\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.855996 kubelet[2611]: I0213 19:23:05.855973 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj9l8\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-kube-api-access-vj9l8\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856025 kubelet[2611]: I0213 19:23:05.856003 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-net\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856051 kubelet[2611]: I0213 19:23:05.856025 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afe3804c-277a-4f14-ab0c-6e903c6ef560-clustermesh-secrets\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856051 kubelet[2611]: I0213 19:23:05.856041 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-run\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856103 kubelet[2611]: I0213 19:23:05.856055 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-lib-modules\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856103 kubelet[2611]: I0213 19:23:05.856072 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-hubble-tls\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856103 kubelet[2611]: I0213 19:23:05.856088 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-config-path\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.855138 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cni-path" (OuterVolumeSpecName: "cni-path") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.855158 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.856210 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-etc-cni-netd\") pod \"afe3804c-277a-4f14-ab0c-6e903c6ef560\" (UID: \"afe3804c-277a-4f14-ab0c-6e903c6ef560\") "
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.856319 2611 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.856333 2611 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.856342 2611 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.856489 kubelet[2611]: I0213 19:23:05.856350 2611 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.856846 kubelet[2611]: I0213 19:23:05.856361 2611 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hkdb5\" (UniqueName: \"kubernetes.io/projected/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce-kube-api-access-hkdb5\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.856846 kubelet[2611]: I0213 19:23:05.855166 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856846 kubelet[2611]: I0213 19:23:05.855168 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856846 kubelet[2611]: I0213 19:23:05.855185 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-hostproc" (OuterVolumeSpecName: "hostproc") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856846 kubelet[2611]: I0213 19:23:05.856234 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856980 kubelet[2611]: I0213 19:23:05.856251 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.856980 kubelet[2611]: I0213 19:23:05.856412 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.858437 kubelet[2611]: I0213 19:23:05.858396 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:23:05.858486 kubelet[2611]: I0213 19:23:05.858451 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:23:05.859460 kubelet[2611]: I0213 19:23:05.859415 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe3804c-277a-4f14-ab0c-6e903c6ef560-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:23:05.859460 kubelet[2611]: I0213 19:23:05.859436 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:23:05.859712 kubelet[2611]: I0213 19:23:05.859671 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-kube-api-access-vj9l8" (OuterVolumeSpecName: "kube-api-access-vj9l8") pod "afe3804c-277a-4f14-ab0c-6e903c6ef560" (UID: "afe3804c-277a-4f14-ab0c-6e903c6ef560"). InnerVolumeSpecName "kube-api-access-vj9l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957091 2611 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957127 2611 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vj9l8\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-kube-api-access-vj9l8\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957144 2611 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957156 2611 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afe3804c-277a-4f14-ab0c-6e903c6ef560-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957165 2611 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957157 kubelet[2611]: I0213 19:23:05.957173 2611 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957471 kubelet[2611]: I0213 19:23:05.957181 2611 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957471 kubelet[2611]: I0213 19:23:05.957188 2611 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afe3804c-277a-4f14-ab0c-6e903c6ef560-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957471 kubelet[2611]: I0213 19:23:05.957196 2611 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afe3804c-277a-4f14-ab0c-6e903c6ef560-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957471 kubelet[2611]: I0213 19:23:05.957203 2611 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:05.957471 kubelet[2611]: I0213 19:23:05.957212 2611 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afe3804c-277a-4f14-ab0c-6e903c6ef560-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 19:23:06.003099 systemd[1]: Removed slice kubepods-besteffort-pod69f79891_4c91_49ca_bc3b_9cab4e2fd9ce.slice - libcontainer container kubepods-besteffort-pod69f79891_4c91_49ca_bc3b_9cab4e2fd9ce.slice.
Feb 13 19:23:06.004739 systemd[1]: Removed slice kubepods-burstable-podafe3804c_277a_4f14_ab0c_6e903c6ef560.slice - libcontainer container kubepods-burstable-podafe3804c_277a_4f14_ab0c_6e903c6ef560.slice.
Feb 13 19:23:06.005095 systemd[1]: kubepods-burstable-podafe3804c_277a_4f14_ab0c_6e903c6ef560.slice: Consumed 6.687s CPU time.
Feb 13 19:23:06.183027 kubelet[2611]: I0213 19:23:06.183002 2611 scope.go:117] "RemoveContainer" containerID="34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5"
Feb 13 19:23:06.186009 containerd[1445]: time="2025-02-13T19:23:06.185844374Z" level=info msg="RemoveContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\""
Feb 13 19:23:06.190009 containerd[1445]: time="2025-02-13T19:23:06.189980648Z" level=info msg="RemoveContainer for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" returns successfully"
Feb 13 19:23:06.190577 kubelet[2611]: I0213 19:23:06.190302 2611 scope.go:117] "RemoveContainer" containerID="34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5"
Feb 13 19:23:06.190653 containerd[1445]: time="2025-02-13T19:23:06.190501392Z" level=error msg="ContainerStatus for \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\": not found"
Feb 13 19:23:06.193319 kubelet[2611]: E0213 19:23:06.193281 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\": not found" containerID="34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5"
Feb 13 19:23:06.193410 kubelet[2611]: I0213 19:23:06.193328 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5"} err="failed to get container status \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\": rpc error: code = NotFound desc = an error occurred when try to find container \"34453c01956d2f8d1e33d6836b4677612f9ae5257cd9da1ff8ea730f14b79ef5\": not found"
Feb 13 19:23:06.193450 kubelet[2611]: I0213 19:23:06.193411 2611 scope.go:117] "RemoveContainer" containerID="fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b"
Feb 13 19:23:06.194689 containerd[1445]: time="2025-02-13T19:23:06.194659425Z" level=info msg="RemoveContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\""
Feb 13 19:23:06.199337 containerd[1445]: time="2025-02-13T19:23:06.199269045Z" level=info msg="RemoveContainer for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" returns successfully"
Feb 13 19:23:06.199751 kubelet[2611]: I0213 19:23:06.199706 2611 scope.go:117] "RemoveContainer" containerID="dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff"
Feb 13 19:23:06.200885 containerd[1445]: time="2025-02-13T19:23:06.200860316Z" level=info msg="RemoveContainer for \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\""
Feb 13 19:23:06.203824 containerd[1445]: time="2025-02-13T19:23:06.203728388Z" level=info msg="RemoveContainer for \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\" returns successfully"
Feb 13 19:23:06.204144 kubelet[2611]: I0213 19:23:06.204025 2611 scope.go:117] "RemoveContainer" containerID="346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33"
Feb 13 19:23:06.205551 containerd[1445]: time="2025-02-13T19:23:06.205525414Z" level=info msg="RemoveContainer for \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\""
Feb 13 19:23:06.208611 containerd[1445]: time="2025-02-13T19:23:06.208573961Z" level=info msg="RemoveContainer for \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\" returns successfully"
Feb 13 19:23:06.208762 kubelet[2611]: I0213 19:23:06.208738 2611 scope.go:117] "RemoveContainer" containerID="e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4"
Feb 13 19:23:06.209642 containerd[1445]: time="2025-02-13T19:23:06.209611329Z" level=info msg="RemoveContainer for \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\""
Feb 13 19:23:06.211841 containerd[1445]: time="2025-02-13T19:23:06.211806062Z" level=info msg="RemoveContainer for \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\" returns successfully"
Feb 13 19:23:06.212024 kubelet[2611]: I0213 19:23:06.212000 2611 scope.go:117] "RemoveContainer" containerID="d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6"
Feb 13 19:23:06.212812 containerd[1445]: time="2025-02-13T19:23:06.212764673Z" level=info msg="RemoveContainer for \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\""
Feb 13 19:23:06.214782 containerd[1445]: time="2025-02-13T19:23:06.214750052Z" level=info msg="RemoveContainer for \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\" returns successfully"
Feb 13 19:23:06.214930 kubelet[2611]: I0213 19:23:06.214901 2611 scope.go:117] "RemoveContainer" containerID="fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b"
Feb 13 19:23:06.215104 containerd[1445]: time="2025-02-13T19:23:06.215065242Z" level=error msg="ContainerStatus for \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\": not found"
Feb 13 19:23:06.215242 kubelet[2611]: E0213 19:23:06.215218 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\": not found" containerID="fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b"
Feb 13 19:23:06.215280 kubelet[2611]: I0213 19:23:06.215250 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b"} err="failed to get container status \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b\": not found"
Feb 13 19:23:06.215280 kubelet[2611]: I0213 19:23:06.215269 2611 scope.go:117] "RemoveContainer" containerID="dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff"
Feb 13 19:23:06.215447 containerd[1445]: time="2025-02-13T19:23:06.215413832Z" level=error msg="ContainerStatus for \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\": not found"
Feb 13 19:23:06.215677 kubelet[2611]: E0213 19:23:06.215561 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\": not found" containerID="dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff"
Feb 13 19:23:06.215677 kubelet[2611]: I0213 19:23:06.215590 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff"} err="failed to get container status \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfaaf6dc342935b19a6594252cdac5b6a0e602fc72eeea9aa8b6b238314616ff\": not found"
Feb 13 19:23:06.215677 kubelet[2611]: I0213 19:23:06.215606 2611 scope.go:117] "RemoveContainer" containerID="346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33"
Feb 13 19:23:06.215790 containerd[1445]: time="2025-02-13T19:23:06.215771381Z" level=error msg="ContainerStatus for \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\": not found"
Feb 13 19:23:06.215873 kubelet[2611]: E0213 19:23:06.215853 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\": not found" containerID="346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33"
Feb 13 19:23:06.216072 kubelet[2611]: I0213 19:23:06.215874 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33"} err="failed to get container status \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\": rpc error: code = NotFound desc = an error occurred when try to find container \"346ae3dd3b7e4a559336b08422e78e4dc101a83dad5647a0551b29a074616a33\": not found"
Feb 13 19:23:06.216072 kubelet[2611]: I0213 19:23:06.215888 2611 scope.go:117] "RemoveContainer" containerID="e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4"
Feb 13 19:23:06.216211 containerd[1445]: time="2025-02-13T19:23:06.216036773Z" level=error msg="ContainerStatus for \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\": not found"
Feb 13 19:23:06.216243 kubelet[2611]: E0213 19:23:06.216137 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\": not found" containerID="e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4"
Feb 13 19:23:06.216243 kubelet[2611]: I0213 19:23:06.216162 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4"} err="failed to get container status \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e543c9905724c1868cd80035466926a902c3fe09b2646b4d35007a6ac017ebd4\": not found"
Feb 13 19:23:06.216243 kubelet[2611]: I0213 19:23:06.216177 2611 scope.go:117] "RemoveContainer" containerID="d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6"
Feb 13 19:23:06.216668 containerd[1445]: time="2025-02-13T19:23:06.216560557Z" level=error msg="ContainerStatus for \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\": not found"
Feb 13 19:23:06.216755 kubelet[2611]: E0213 19:23:06.216673 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\": not found" containerID="d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6"
Feb 13 19:23:06.216755 kubelet[2611]: I0213 19:23:06.216690 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6"} err="failed to get container status \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d09b8a75596574bf6d5909203c8287f706481108bd7ce8a462694e456c32f1e6\": not found"
Feb 13 19:23:06.558887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d21f3b17cac6e2237a2402adaab7db11c736d33b1420b56a6e68656480fa4e4-rootfs.mount: Deactivated successfully.
Feb 13 19:23:06.559010 systemd[1]: var-lib-kubelet-pods-69f79891\x2d4c91\x2d49ca\x2dbc3b\x2d9cab4e2fd9ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkdb5.mount: Deactivated successfully.
Feb 13 19:23:06.559071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd9176cc50c035a248517ccfee62e24ee5f503dfbf466fd401a31e72bc490313-rootfs.mount: Deactivated successfully.
Feb 13 19:23:06.559127 systemd[1]: var-lib-kubelet-pods-afe3804c\x2d277a\x2d4f14\x2dab0c\x2d6e903c6ef560-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvj9l8.mount: Deactivated successfully.
Feb 13 19:23:06.559182 systemd[1]: var-lib-kubelet-pods-afe3804c\x2d277a\x2d4f14\x2dab0c\x2d6e903c6ef560-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:23:06.559241 systemd[1]: var-lib-kubelet-pods-afe3804c\x2d277a\x2d4f14\x2dab0c\x2d6e903c6ef560-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:23:07.042668 kubelet[2611]: E0213 19:23:07.042502 2611 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:23:07.514569 sshd[4247]: Connection closed by 10.0.0.1 port 60540
Feb 13 19:23:07.516173 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:07.522640 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:60540.service: Deactivated successfully.
Feb 13 19:23:07.527397 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:23:07.527643 systemd[1]: session-23.scope: Consumed 1.053s CPU time.
Feb 13 19:23:07.529632 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:23:07.536862 systemd-logind[1426]: Removed session 23.
Feb 13 19:23:07.548196 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556).
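
The RemoveContainer sequence above is idempotent by design: after each container is removed, a follow-up status query races with the deletion and containerd answers with gRPC NotFound ("an error occurred when try to find container ..."), which the kubelet's pod_container_deletor records and then ignores, since NotFound simply means the container is already gone. A small runnable sketch of that error-handling convention with the real google.golang.org/grpc status API; the getStatus helper is a hypothetical stand-in for a CRI ContainerStatus call:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// getStatus stands in for a CRI ContainerStatus call; for the already-removed
// IDs above the runtime answers with gRPC code NotFound.
func getStatus(id string) error {
	return status.Errorf(codes.NotFound,
		"an error occurred when try to find container %q: not found", id)
}

func main() {
	err := getStatus("fdd89e399c1d27578fbca7465f191ae300bbc3ecdfc447cd7aa1bcbfeebfd27b")
	if status.Code(err) == codes.NotFound {
		// Deletion is idempotent: NotFound means the container is already
		// gone, so cleanup records the error and moves on instead of failing.
		fmt.Println("already removed:", err)
		return
	}
	if err != nil {
		fmt.Println("real failure:", err)
	}
}
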
Feb 13 19:23:07.594968 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:07.596280 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:07.601339 systemd-logind[1426]: New session 24 of user core.
Feb 13 19:23:07.613204 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:23:07.999648 kubelet[2611]: I0213 19:23:07.998770 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" path="/var/lib/kubelet/pods/69f79891-4c91-49ca-bc3b-9cab4e2fd9ce/volumes"
Feb 13 19:23:07.999648 kubelet[2611]: I0213 19:23:07.999184 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" path="/var/lib/kubelet/pods/afe3804c-277a-4f14-ab0c-6e903c6ef560/volumes"
Feb 13 19:23:08.520630 sshd[4414]: Connection closed by 10.0.0.1 port 60556
Feb 13 19:23:08.521150 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:08.534370 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:60556.service: Deactivated successfully.
Feb 13 19:23:08.539480 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:23:08.543229 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:23:08.551259 kubelet[2611]: I0213 19:23:08.543664 2611 topology_manager.go:215] "Topology Admit Handler" podUID="68bb76a9-cfc9-44d7-b0b7-4d3903979eb3" podNamespace="kube-system" podName="cilium-wqs8v"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543725 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="mount-bpf-fs"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543735 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="mount-cgroup"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543742 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="apply-sysctl-overwrites"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543748 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="clean-cilium-state"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543755 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" containerName="cilium-operator"
Feb 13 19:23:08.551259 kubelet[2611]: E0213 19:23:08.543763 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="cilium-agent"
Feb 13 19:23:08.551259 kubelet[2611]: I0213 19:23:08.543781 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f79891-4c91-49ca-bc3b-9cab4e2fd9ce" containerName="cilium-operator"
Feb 13 19:23:08.551259 kubelet[2611]: I0213 19:23:08.543787 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe3804c-277a-4f14-ab0c-6e903c6ef560" containerName="cilium-agent"
Feb 13 19:23:08.553079 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:60566.service - OpenSSH per-connection server daemon (10.0.0.1:60566).
Feb 13 19:23:08.557497 systemd-logind[1426]: Removed session 24.
Feb 13 19:23:08.570674 kubelet[2611]: I0213 19:23:08.570630 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-etc-cni-netd\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.571922 kubelet[2611]: I0213 19:23:08.571881 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-clustermesh-secrets\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.572038 kubelet[2611]: I0213 19:23:08.572025 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-hubble-tls\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.572401 systemd[1]: Created slice kubepods-burstable-pod68bb76a9_cfc9_44d7_b0b7_4d3903979eb3.slice - libcontainer container kubepods-burstable-pod68bb76a9_cfc9_44d7_b0b7_4d3903979eb3.slice.
Feb 13 19:23:08.573806 kubelet[2611]: I0213 19:23:08.573782 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-bpf-maps\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.573951 kubelet[2611]: I0213 19:23:08.573923 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-hostproc\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574021 kubelet[2611]: I0213 19:23:08.574008 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-host-proc-sys-kernel\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574093 kubelet[2611]: I0213 19:23:08.574081 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-cilium-config-path\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574166 kubelet[2611]: I0213 19:23:08.574154 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-host-proc-sys-net\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574260 kubelet[2611]: I0213 19:23:08.574217 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-lib-modules\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574481 kubelet[2611]: I0213 19:23:08.574373 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-xtables-lock\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574609 kubelet[2611]: I0213 19:23:08.574573 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-cilium-ipsec-secrets\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574682 kubelet[2611]: I0213 19:23:08.574669 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qvg\" (UniqueName: \"kubernetes.io/projected/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-kube-api-access-q6qvg\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574793 kubelet[2611]: I0213 19:23:08.574731 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-cilium-run\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.574903 kubelet[2611]: I0213 19:23:08.574886 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-cilium-cgroup\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.580965 kubelet[2611]: I0213 19:23:08.580882 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68bb76a9-cfc9-44d7-b0b7-4d3903979eb3-cni-path\") pod \"cilium-wqs8v\" (UID: \"68bb76a9-cfc9-44d7-b0b7-4d3903979eb3\") " pod="kube-system/cilium-wqs8v"
Feb 13 19:23:08.601627 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 60566 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:08.602946 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:08.606775 systemd-logind[1426]: New session 25 of user core.
Feb 13 19:23:08.618169 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:23:08.668190 sshd[4429]: Connection closed by 10.0.0.1 port 60566
Feb 13 19:23:08.668547 sshd-session[4427]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:08.681717 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:60566.service: Deactivated successfully.
Feb 13 19:23:08.688707 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:23:08.690519 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:23:08.699240 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:60570.service - OpenSSH per-connection server daemon (10.0.0.1:60570).
Feb 13 19:23:08.705292 systemd-logind[1426]: Removed session 25.
Feb 13 19:23:08.737383 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 60570 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:08.738672 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:08.742240 systemd-logind[1426]: New session 26 of user core.
Feb 13 19:23:08.759103 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:23:08.886173 kubelet[2611]: E0213 19:23:08.886051 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:08.887525 containerd[1445]: time="2025-02-13T19:23:08.886540631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqs8v,Uid:68bb76a9-cfc9-44d7-b0b7-4d3903979eb3,Namespace:kube-system,Attempt:0,}"
Feb 13 19:23:08.909500 containerd[1445]: time="2025-02-13T19:23:08.909284719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:23:08.909500 containerd[1445]: time="2025-02-13T19:23:08.909336238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:23:08.909500 containerd[1445]: time="2025-02-13T19:23:08.909347197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:08.909500 containerd[1445]: time="2025-02-13T19:23:08.909417075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:08.928082 systemd[1]: Started cri-containerd-4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa.scope - libcontainer container 4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa.
Feb 13 19:23:08.949552 containerd[1445]: time="2025-02-13T19:23:08.949518672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqs8v,Uid:68bb76a9-cfc9-44d7-b0b7-4d3903979eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\""
Feb 13 19:23:08.950257 kubelet[2611]: E0213 19:23:08.950237 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:08.952086 containerd[1445]: time="2025-02-13T19:23:08.951973648Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:23:08.961419 containerd[1445]: time="2025-02-13T19:23:08.961370123Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b\""
Feb 13 19:23:08.961883 containerd[1445]: time="2025-02-13T19:23:08.961852470Z" level=info msg="StartContainer for \"f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b\""
Feb 13 19:23:08.986080 systemd[1]: Started cri-containerd-f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b.scope - libcontainer container f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b.
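
The sandbox startup above follows the standard CRI call order: RunPodSandbox returns a sandbox id (4ac4e404...), CreateContainer places the first init container (mount-cgroup) inside it, and StartContainer launches it under a transient cri-containerd-*.scope unit. A skeleton of the same three calls against the CRI runtime service, using the real k8s.io/cri-api types; the socket path is the common containerd default (an assumption), and the ContainerConfig omits the image and mounts a real request needs:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI socket (default path assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox with the metadata seen in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-wqs8v",
				Uid:       "68bb76a9-cfc9-44d7-b0b7-4d3903979eb3",
				Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside that sandbox (the mount-cgroup init step).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
		},
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, which the log answers with "returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		panic(err)
	}
}
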
Feb 13 19:23:09.009087 containerd[1445]: time="2025-02-13T19:23:09.009044659Z" level=info msg="StartContainer for \"f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b\" returns successfully"
Feb 13 19:23:09.024142 systemd[1]: cri-containerd-f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b.scope: Deactivated successfully.
Feb 13 19:23:09.052231 containerd[1445]: time="2025-02-13T19:23:09.052171669Z" level=info msg="shim disconnected" id=f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b namespace=k8s.io
Feb 13 19:23:09.052231 containerd[1445]: time="2025-02-13T19:23:09.052227788Z" level=warning msg="cleaning up after shim disconnected" id=f2588582deb588702ce4e6af094303a0543f3eefbd8303770668e99725b8ec2b namespace=k8s.io
Feb 13 19:23:09.052231 containerd[1445]: time="2025-02-13T19:23:09.052236708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:09.196695 kubelet[2611]: E0213 19:23:09.196658 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:09.200394 containerd[1445]: time="2025-02-13T19:23:09.200297051Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:23:09.209740 containerd[1445]: time="2025-02-13T19:23:09.209688107Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81\""
Feb 13 19:23:09.210412 containerd[1445]: time="2025-02-13T19:23:09.210262693Z" level=info msg="StartContainer for \"109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81\""
Feb 13 19:23:09.247110 systemd[1]: Started cri-containerd-109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81.scope - libcontainer container 109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81.
Feb 13 19:23:09.269020 containerd[1445]: time="2025-02-13T19:23:09.268956451Z" level=info msg="StartContainer for \"109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81\" returns successfully"
Feb 13 19:23:09.278957 systemd[1]: cri-containerd-109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81.scope: Deactivated successfully.
Feb 13 19:23:09.301338 containerd[1445]: time="2025-02-13T19:23:09.301277439Z" level=info msg="shim disconnected" id=109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81 namespace=k8s.io
Feb 13 19:23:09.301338 containerd[1445]: time="2025-02-13T19:23:09.301330718Z" level=warning msg="cleaning up after shim disconnected" id=109eae9ad7ba503e3d97c4ec0a2960615381d7dd8af89d39a692e01754abea81 namespace=k8s.io
Feb 13 19:23:09.301338 containerd[1445]: time="2025-02-13T19:23:09.301339397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:10.201763 kubelet[2611]: E0213 19:23:10.201729 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:10.209271 containerd[1445]: time="2025-02-13T19:23:10.209202264Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:23:10.223875 containerd[1445]: time="2025-02-13T19:23:10.223809426Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58\""
Feb 13 19:23:10.225891 containerd[1445]: time="2025-02-13T19:23:10.225856941Z" level=info msg="StartContainer for \"1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58\""
Feb 13 19:23:10.250079 systemd[1]: Started cri-containerd-1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58.scope - libcontainer container 1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58.
Feb 13 19:23:10.277418 containerd[1445]: time="2025-02-13T19:23:10.277368338Z" level=info msg="StartContainer for \"1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58\" returns successfully"
Feb 13 19:23:10.278232 systemd[1]: cri-containerd-1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58.scope: Deactivated successfully.
Feb 13 19:23:10.301585 containerd[1445]: time="2025-02-13T19:23:10.301525531Z" level=info msg="shim disconnected" id=1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58 namespace=k8s.io
Feb 13 19:23:10.301585 containerd[1445]: time="2025-02-13T19:23:10.301579450Z" level=warning msg="cleaning up after shim disconnected" id=1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58 namespace=k8s.io
Feb 13 19:23:10.301585 containerd[1445]: time="2025-02-13T19:23:10.301588409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:10.686815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1057f4ad3b2cb01e57e0506bc157a065674f6dd969ccb7fb752e8e38bf2b7b58-rootfs.mount: Deactivated successfully.
Feb 13 19:23:11.205499 kubelet[2611]: E0213 19:23:11.205369 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:11.209282 containerd[1445]: time="2025-02-13T19:23:11.209244312Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:23:11.219455 containerd[1445]: time="2025-02-13T19:23:11.219294873Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b\""
Feb 13 19:23:11.220387 containerd[1445]: time="2025-02-13T19:23:11.220355612Z" level=info msg="StartContainer for \"1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b\""
Feb 13 19:23:11.253088 systemd[1]: Started cri-containerd-1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b.scope - libcontainer container 1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b.
Feb 13 19:23:11.272873 systemd[1]: cri-containerd-1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b.scope: Deactivated successfully.
Feb 13 19:23:11.274402 containerd[1445]: time="2025-02-13T19:23:11.274364183Z" level=info msg="StartContainer for \"1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b\" returns successfully"
Feb 13 19:23:11.288960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b-rootfs.mount: Deactivated successfully.
Feb 13 19:23:11.292650 containerd[1445]: time="2025-02-13T19:23:11.292596982Z" level=info msg="shim disconnected" id=1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b namespace=k8s.io
Feb 13 19:23:11.292650 containerd[1445]: time="2025-02-13T19:23:11.292648061Z" level=warning msg="cleaning up after shim disconnected" id=1e5dfb13bc8c9337f8868dd97db44e3261e9bb27574a8fff4000a0fa7e4de35b namespace=k8s.io
Feb 13 19:23:11.292781 containerd[1445]: time="2025-02-13T19:23:11.292657101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:23:11.997714 kubelet[2611]: E0213 19:23:11.997387 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:12.046349 kubelet[2611]: E0213 19:23:12.043840 2611 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:23:12.211836 kubelet[2611]: E0213 19:23:12.211789 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:12.214724 containerd[1445]: time="2025-02-13T19:23:12.214630985Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:23:12.227590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123402981.mount: Deactivated successfully.
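
Across the last several blocks the pod's init steps each run to completion in strict order (StartContainer returns, the transient scope deactivates, the shim disconnects) before the next CreateContainer is issued; with cilium-agent being created above, the observed order is summarized in the short Go listing below (container-id prefixes taken from the log):

package main

import "fmt"

// The ordered steps observed for sandbox 4ac4e404... in the log: four init
// containers that each run to completion, then the long-running agent.
var ciliumStartupSequence = []string{
	"mount-cgroup",            // f2588582...
	"apply-sysctl-overwrites", // 109eae9a...
	"mount-bpf-fs",            // 1057f4ad...
	"clean-cilium-state",      // 1e5dfb13...
	"cilium-agent",            // d9af3942...
}

func main() {
	for i, name := range ciliumStartupSequence {
		fmt.Printf("step %d: %s\n", i+1, name)
	}
}
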
Feb 13 19:23:12.233747 containerd[1445]: time="2025-02-13T19:23:12.233687445Z" level=info msg="CreateContainer within sandbox \"4ac4e404a4547b013c5d4ac2dc69995ecbd4ea8ce26a26fdb2ea745320c858aa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b\""
Feb 13 19:23:12.237144 containerd[1445]: time="2025-02-13T19:23:12.237061305Z" level=info msg="StartContainer for \"d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b\""
Feb 13 19:23:12.267126 systemd[1]: Started cri-containerd-d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b.scope - libcontainer container d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b.
Feb 13 19:23:12.292410 containerd[1445]: time="2025-02-13T19:23:12.292370198Z" level=info msg="StartContainer for \"d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b\" returns successfully"
Feb 13 19:23:12.546933 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:23:13.216826 kubelet[2611]: E0213 19:23:13.216773 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:13.234197 kubelet[2611]: I0213 19:23:13.234123 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqs8v" podStartSLOduration=5.234106711 podStartE2EDuration="5.234106711s" podCreationTimestamp="2025-02-13 19:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:13.233398682 +0000 UTC m=+81.320042007" watchObservedRunningTime="2025-02-13 19:23:13.234106711 +0000 UTC m=+81.320749876"
Feb 13 19:23:13.965817 kubelet[2611]: I0213 19:23:13.965732 2611 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:23:13Z","lastTransitionTime":"2025-02-13T19:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:23:14.887676 kubelet[2611]: E0213 19:23:14.887599 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:15.327024 systemd-networkd[1373]: lxc_health: Link UP
Feb 13 19:23:15.337855 systemd-networkd[1373]: lxc_health: Gained carrier
Feb 13 19:23:16.771064 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Feb 13 19:23:16.889622 kubelet[2611]: E0213 19:23:16.889581 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:17.223956 kubelet[2611]: E0213 19:23:17.223660 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:18.225647 kubelet[2611]: E0213 19:23:18.225603 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:19.995395 kubelet[2611]: E0213 19:23:19.995210 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:19.995395 kubelet[2611]: E0213 19:23:19.995301 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:21.466099 systemd[1]: run-containerd-runc-k8s.io-d9af3942ab2e2304e33856ac89dfe73e9018a1c1572959126de7619923b41b8b-runc.JZGUyv.mount: Deactivated successfully.
Feb 13 19:23:21.516093 sshd[4443]: Connection closed by 10.0.0.1 port 60570
Feb 13 19:23:21.515373 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:21.517892 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:60570.service: Deactivated successfully.
Feb 13 19:23:21.520216 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:23:21.521877 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:23:21.523184 systemd-logind[1426]: Removed session 26.