Feb 13 19:02:20.897227 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:02:20.897250 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:02:20.897260 kernel: KASLR enabled
Feb 13 19:02:20.897266 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:02:20.897272 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:02:20.897278 kernel: random: crng init done
Feb 13 19:02:20.897285 kernel: secureboot: Secure boot disabled
Feb 13 19:02:20.897291 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:02:20.897298 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:02:20.897306 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:02:20.897312 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897318 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897324 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897331 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897338 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897346 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897353 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897359 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897366 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:20.897372 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:02:20.897379 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:02:20.897386 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:20.897392 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:02:20.897398 kernel: Zone ranges:
Feb 13 19:02:20.897405 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:20.897413 kernel: DMA32 empty
Feb 13 19:02:20.897419 kernel: Normal empty
Feb 13 19:02:20.897426 kernel: Movable zone start for each node
Feb 13 19:02:20.897432 kernel: Early memory node ranges
Feb 13 19:02:20.897439 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:02:20.897445 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:02:20.897452 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:02:20.897458 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:02:20.897465 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:02:20.897480 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:02:20.897487 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:02:20.897494 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:02:20.897502 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:02:20.897508 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:20.897515 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:02:20.897524 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:02:20.897531 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:02:20.897538 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:02:20.897546 kernel: psci: Trusted OS migration not required
Feb 13 19:02:20.897553 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:02:20.897561 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:02:20.897568 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:02:20.897575 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:02:20.897582 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:02:20.897589 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:02:20.897596 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:02:20.897602 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:02:20.897609 kernel: CPU features: detected: Spectre-v4
Feb 13 19:02:20.897617 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:02:20.897624 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:02:20.897631 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:02:20.897638 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:02:20.897645 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:02:20.897652 kernel: alternatives: applying boot alternatives
Feb 13 19:02:20.897660 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:20.897667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:02:20.897674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:02:20.897681 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:02:20.897688 kernel: Fallback order for Node 0: 0
Feb 13 19:02:20.897697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:02:20.897704 kernel: Policy zone: DMA
Feb 13 19:02:20.897711 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:02:20.897717 kernel: software IO TLB: area num 4.
Feb 13 19:02:20.897725 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:02:20.897732 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Feb 13 19:02:20.897739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:02:20.897746 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:02:20.897754 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:02:20.897761 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:02:20.897768 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:02:20.897775 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:02:20.897784 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:02:20.897791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:02:20.897798 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:02:20.897804 kernel: GICv3: 256 SPIs implemented
Feb 13 19:02:20.897811 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:02:20.897818 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:02:20.897825 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:02:20.897832 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:02:20.897839 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:02:20.897846 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:02:20.897853 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:02:20.897861 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:02:20.897868 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:02:20.897875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:02:20.897882 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:20.897889 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:02:20.897897 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:02:20.897904 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:02:20.897911 kernel: arm-pv: using stolen time PV
Feb 13 19:02:20.897918 kernel: Console: colour dummy device 80x25
Feb 13 19:02:20.897925 kernel: ACPI: Core revision 20230628
Feb 13 19:02:20.897933 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:02:20.897941 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:02:20.897948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:02:20.897955 kernel: landlock: Up and running.
Feb 13 19:02:20.897963 kernel: SELinux: Initializing.
Feb 13 19:02:20.897970 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:20.897977 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:20.897984 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:02:20.897991 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:02:20.897999 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:02:20.898008 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:02:20.898015 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:02:20.898022 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:02:20.898029 kernel: Remapping and enabling EFI services.
Feb 13 19:02:20.898036 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:02:20.898043 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:02:20.898050 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:02:20.898057 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:02:20.898064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:20.898073 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:02:20.898090 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:02:20.898102 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:02:20.898112 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:02:20.898119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:20.898127 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:02:20.898134 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:02:20.898141 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:02:20.898149 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:02:20.898158 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:20.898166 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:02:20.898173 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:02:20.898181 kernel: SMP: Total of 4 processors activated.
Feb 13 19:02:20.898189 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:02:20.898196 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:02:20.898204 kernel: CPU features: detected: Common not Private translations
Feb 13 19:02:20.898211 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:02:20.898220 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:02:20.898228 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:02:20.898235 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:02:20.898243 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:02:20.898250 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:02:20.898258 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:02:20.898266 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:02:20.898273 kernel: alternatives: applying system-wide alternatives
Feb 13 19:02:20.898280 kernel: devtmpfs: initialized
Feb 13 19:02:20.898289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:02:20.898297 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:02:20.898305 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:02:20.898312 kernel: SMBIOS 3.0.0 present.
Feb 13 19:02:20.898319 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:02:20.898327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:02:20.898335 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:02:20.898343 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:02:20.898351 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:02:20.898361 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:02:20.898369 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Feb 13 19:02:20.898377 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:02:20.898385 kernel: cpuidle: using governor menu
Feb 13 19:02:20.898393 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:02:20.898401 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:02:20.898408 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:02:20.898416 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:02:20.898424 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:02:20.898433 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:02:20.898441 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:02:20.898449 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:02:20.898457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:02:20.898465 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:02:20.898476 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:02:20.898484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:02:20.898492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:02:20.898500 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:02:20.898509 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:02:20.898517 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:02:20.898525 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:02:20.898533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:02:20.898541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:02:20.898549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:02:20.898558 kernel: ACPI: Interpreter enabled
Feb 13 19:02:20.898566 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:02:20.898574 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:02:20.898582 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:02:20.898592 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:02:20.898600 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:02:20.898744 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:02:20.898827 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:02:20.898917 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:02:20.898991 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:02:20.899062 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:02:20.899076 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:02:20.899186 kernel: PCI host bridge to bus 0000:00
Feb 13 19:02:20.899280 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:02:20.899346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:02:20.899410 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:02:20.899478 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:02:20.899570 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:02:20.899659 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:02:20.899731 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:02:20.899800 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:02:20.899875 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:02:20.899946 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:02:20.900051 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:02:20.900146 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:02:20.900212 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:02:20.900273 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:02:20.900336 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:02:20.900346 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:02:20.900354 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:02:20.900361 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:02:20.900369 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:02:20.900380 kernel: iommu: Default domain type: Translated
Feb 13 19:02:20.900388 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:02:20.900396 kernel: efivars: Registered efivars operations
Feb 13 19:02:20.900403 kernel: vgaarb: loaded
Feb 13 19:02:20.900411 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:02:20.900419 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:02:20.900428 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:02:20.900435 kernel: pnp: PnP ACPI init
Feb 13 19:02:20.900593 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:02:20.900611 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:02:20.900619 kernel: NET: Registered PF_INET protocol family
Feb 13 19:02:20.900627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:02:20.900635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:02:20.900643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:02:20.900651 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:02:20.900659 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:02:20.900667 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:02:20.900676 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:20.900684 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:20.900692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:02:20.900700 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:02:20.900707 kernel: kvm [1]: HYP mode not available
Feb 13 19:02:20.900715 kernel: Initialise system trusted keyrings
Feb 13 19:02:20.900723 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:02:20.900731 kernel: Key type asymmetric registered
Feb 13 19:02:20.900739 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:02:20.900748 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:02:20.900755 kernel: io scheduler mq-deadline registered
Feb 13 19:02:20.900763 kernel: io scheduler kyber registered
Feb 13 19:02:20.900771 kernel: io scheduler bfq registered
Feb 13 19:02:20.900778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:02:20.900786 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:02:20.900794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:02:20.900874 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:02:20.900885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:02:20.900895 kernel: thunder_xcv, ver 1.0
Feb 13 19:02:20.900903 kernel: thunder_bgx, ver 1.0
Feb 13 19:02:20.900911 kernel: nicpf, ver 1.0
Feb 13 19:02:20.900919 kernel: nicvf, ver 1.0
Feb 13 19:02:20.901002 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:02:20.901072 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:02:20 UTC (1739473340)
Feb 13 19:02:20.901105 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:02:20.901114 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:02:20.901124 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:02:20.901132 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:02:20.901140 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:02:20.901148 kernel: Segment Routing with IPv6
Feb 13 19:02:20.901156 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:02:20.901163 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:02:20.901171 kernel: Key type dns_resolver registered
Feb 13 19:02:20.901179 kernel: registered taskstats version 1
Feb 13 19:02:20.901186 kernel: Loading compiled-in X.509 certificates
Feb 13 19:02:20.901194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:02:20.901204 kernel: Key type .fscrypt registered
Feb 13 19:02:20.901212 kernel: Key type fscrypt-provisioning registered
Feb 13 19:02:20.901220 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:02:20.901228 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:02:20.901235 kernel: ima: No architecture policies found
Feb 13 19:02:20.901243 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:02:20.901251 kernel: clk: Disabling unused clocks
Feb 13 19:02:20.901258 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:02:20.901267 kernel: Run /init as init process
Feb 13 19:02:20.901275 kernel: with arguments:
Feb 13 19:02:20.901283 kernel: /init
Feb 13 19:02:20.901290 kernel: with environment:
Feb 13 19:02:20.901298 kernel: HOME=/
Feb 13 19:02:20.901306 kernel: TERM=linux
Feb 13 19:02:20.901314 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:02:20.901323 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:02:20.901333 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:02:20.901344 systemd[1]: Detected virtualization kvm.
Feb 13 19:02:20.901352 systemd[1]: Detected architecture arm64.
Feb 13 19:02:20.901360 systemd[1]: Running in initrd.
Feb 13 19:02:20.901368 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:02:20.901376 systemd[1]: Hostname set to .
Feb 13 19:02:20.901384 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:20.901392 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:02:20.901401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:20.901410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:20.901419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:02:20.901427 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:20.901436 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:02:20.901445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:02:20.901454 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:02:20.901464 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:02:20.901481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:20.901490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:20.901498 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:20.901507 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:20.901515 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:20.901523 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:20.901532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:20.901540 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:20.901551 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:02:20.901559 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:02:20.901568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:20.901576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:20.901584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:20.901593 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:20.901601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:02:20.901609 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:20.901619 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:02:20.901627 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:02:20.901636 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:20.901644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:20.901653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:20.901661 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:20.901669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:20.901679 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:02:20.901688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:02:20.901696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:20.901725 systemd-journald[240]: Collecting audit messages is disabled.
Feb 13 19:02:20.901748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:20.901756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:02:20.901765 systemd-journald[240]: Journal started
Feb 13 19:02:20.901785 systemd-journald[240]: Runtime Journal (/run/log/journal/3a010ff1acf944f2be1345bbcc8c8895) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:02:20.892664 systemd-modules-load[241]: Inserted module 'overlay'
Feb 13 19:02:20.905490 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:20.908100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:02:20.910564 kernel: Bridge firewalling registered
Feb 13 19:02:20.908692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:20.910024 systemd-modules-load[241]: Inserted module 'br_netfilter'
Feb 13 19:02:20.911333 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:20.915074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:20.919284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:20.922333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:20.924540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:20.927944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:20.929073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:20.941254 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:02:20.943212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:20.953405 dracut-cmdline[276]: dracut-dracut-053
Feb 13 19:02:20.956194 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:20.978962 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 19:02:20.978983 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:20.979014 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:20.983822 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 19:02:20.987066 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:20.988028 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:21.028133 kernel: SCSI subsystem initialized
Feb 13 19:02:21.033141 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:02:21.041105 kernel: iscsi: registered transport (tcp)
Feb 13 19:02:21.055101 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:02:21.055146 kernel: QLogic iSCSI HBA Driver
Feb 13 19:02:21.104488 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:21.114247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:02:21.133240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:02:21.133290 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:02:21.134099 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:02:21.181115 kernel: raid6: neonx8 gen() 15777 MB/s
Feb 13 19:02:21.198105 kernel: raid6: neonx4 gen() 15783 MB/s
Feb 13 19:02:21.215100 kernel: raid6: neonx2 gen() 13199 MB/s
Feb 13 19:02:21.232103 kernel: raid6: neonx1 gen() 10533 MB/s
Feb 13 19:02:21.249103 kernel: raid6: int64x8 gen() 6780 MB/s
Feb 13 19:02:21.266116 kernel: raid6: int64x4 gen() 7346 MB/s
Feb 13 19:02:21.283103 kernel: raid6: int64x2 gen() 6104 MB/s
Feb 13 19:02:21.300101 kernel: raid6: int64x1 gen() 5053 MB/s
Feb 13 19:02:21.300121 kernel: raid6: using algorithm neonx4 gen() 15783 MB/s
Feb 13 19:02:21.317112 kernel: raid6: .... xor() 12351 MB/s, rmw enabled
Feb 13 19:02:21.317131 kernel: raid6: using neon recovery algorithm
Feb 13 19:02:21.322168 kernel: xor: measuring software checksum speed
Feb 13 19:02:21.322187 kernel: 8regs : 21601 MB/sec
Feb 13 19:02:21.323222 kernel: 32regs : 21704 MB/sec
Feb 13 19:02:21.323243 kernel: arm64_neon : 27927 MB/sec
Feb 13 19:02:21.323253 kernel: xor: using function: arm64_neon (27927 MB/sec)
Feb 13 19:02:21.376128 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:02:21.386459 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:21.399269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:21.413271 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 19:02:21.417064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:21.428270 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:02:21.439504 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 19:02:21.467136 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:21.479274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:21.519169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:21.529268 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:02:21.540527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:21.542128 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:21.543725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:21.545396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:21.553458 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:02:21.563907 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:21.577102 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:02:21.590164 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:02:21.590290 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:02:21.590310 kernel: GPT:9289727 != 19775487
Feb 13 19:02:21.590321 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:02:21.590331 kernel: GPT:9289727 != 19775487
Feb 13 19:02:21.590340 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:02:21.590350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:21.579225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:21.579348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:21.583711 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:21.584714 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:21.584962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:21.586921 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:21.596384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:21.605172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:21.610107 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
Feb 13 19:02:21.613109 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (514)
Feb 13 19:02:21.623649 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:02:21.642634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:02:21.651126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:02:21.658001 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:02:21.658967 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:02:21.671227 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:02:21.675238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:21.692338 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:21.709023 disk-uuid[552]: Primary Header is updated.
Feb 13 19:02:21.709023 disk-uuid[552]: Secondary Entries is updated.
Feb 13 19:02:21.709023 disk-uuid[552]: Secondary Header is updated.
Feb 13 19:02:21.716136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:22.726117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:22.726721 disk-uuid[561]: The operation has completed successfully.
Feb 13 19:02:22.747976 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:02:22.748095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:02:22.793625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:02:22.797113 sh[573]: Success
Feb 13 19:02:22.812339 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:02:22.848721 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:02:22.857553 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:02:22.861134 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:02:22.871251 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:02:22.871291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:22.871303 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:02:22.872659 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:02:22.872672 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:02:22.876584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:02:22.877748 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:02:22.887286 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:02:22.888608 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:02:22.898304 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:22.898348 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:22.898360 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:22.901123 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:22.907978 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:02:22.909195 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:22.919212 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:02:22.926338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:02:22.990480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:22.998239 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:23.021642 ignition[672]: Ignition 2.20.0
Feb 13 19:02:23.021652 ignition[672]: Stage: fetch-offline
Feb 13 19:02:23.021687 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:23.021695 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:23.021860 ignition[672]: parsed url from cmdline: ""
Feb 13 19:02:23.021863 ignition[672]: no config URL provided
Feb 13 19:02:23.021868 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:02:23.021875 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:02:23.021899 ignition[672]: op(1): [started] loading QEMU firmware config module
Feb 13 19:02:23.021905 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:02:23.033408 ignition[672]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:02:23.041416 systemd-networkd[767]: lo: Link UP
Feb 13 19:02:23.041426 systemd-networkd[767]: lo: Gained carrier
Feb 13 19:02:23.042448 systemd-networkd[767]: Enumeration completed
Feb 13 19:02:23.043208 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:23.043211 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:23.045558 systemd-networkd[767]: eth0: Link UP
Feb 13 19:02:23.045561 systemd-networkd[767]: eth0: Gained carrier
Feb 13 19:02:23.045567 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:23.045880 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:23.047373 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:23.066131 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:02:23.080289 ignition[672]: parsing config with SHA512: fbaab804c49ce798697038eddd222f6196e4802ace1858c7e106134301baa350109684aeb02723457869efd6a4fb7b50928bd43f58a6f17165e5ae24754e6b9f
Feb 13 19:02:23.084924 unknown[672]: fetched base config from "system"
Feb 13 19:02:23.084935 unknown[672]: fetched user config from "qemu"
Feb 13 19:02:23.086665 ignition[672]: fetch-offline: fetch-offline passed
Feb 13 19:02:23.086793 ignition[672]: Ignition finished successfully
Feb 13 19:02:23.088339 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:23.090423 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:02:23.105237 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:02:23.117145 ignition[774]: Ignition 2.20.0
Feb 13 19:02:23.117158 ignition[774]: Stage: kargs
Feb 13 19:02:23.117319 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:23.117329 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:23.118240 ignition[774]: kargs: kargs passed
Feb 13 19:02:23.118284 ignition[774]: Ignition finished successfully
Feb 13 19:02:23.120876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:23.130259 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:02:23.143186 ignition[783]: Ignition 2.20.0
Feb 13 19:02:23.143197 ignition[783]: Stage: disks
Feb 13 19:02:23.143361 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:23.143371 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:23.144243 ignition[783]: disks: disks passed
Feb 13 19:02:23.146234 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:02:23.144289 ignition[783]: Ignition finished successfully
Feb 13 19:02:23.147648 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:23.148695 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:02:23.150117 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:23.151335 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:23.152834 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:23.164289 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:02:23.173590 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.42
Feb 13 19:02:23.173607 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Feb 13 19:02:23.176021 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:02:23.178245 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:02:23.194240 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:02:23.238112 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:02:23.238686 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:02:23.239844 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:23.252185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:23.253867 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:02:23.255242 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:02:23.255286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:02:23.261197 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Feb 13 19:02:23.255311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:23.265158 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:23.265182 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:23.265193 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:23.262198 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:02:23.266357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:02:23.269033 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:23.270264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:23.313316 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:02:23.317445 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:02:23.321695 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:02:23.326512 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:02:23.401218 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:23.420217 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:02:23.422957 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:23.428112 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:23.446872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:23.450133 ignition[915]: INFO : Ignition 2.20.0
Feb 13 19:02:23.450133 ignition[915]: INFO : Stage: mount
Feb 13 19:02:23.450133 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:23.450133 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:23.453141 ignition[915]: INFO : mount: mount passed
Feb 13 19:02:23.453141 ignition[915]: INFO : Ignition finished successfully
Feb 13 19:02:23.453149 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:02:23.465278 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:02:23.891785 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:02:23.905321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:23.911100 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 19:02:23.913347 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:23.913368 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:23.913378 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:23.915095 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:23.916440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:23.943660 ignition[947]: INFO : Ignition 2.20.0
Feb 13 19:02:23.943660 ignition[947]: INFO : Stage: files
Feb 13 19:02:23.945021 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:23.945021 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:23.945021 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:02:23.947951 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:02:23.947951 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:02:23.950044 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:02:23.950044 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:02:23.950044 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:02:23.950044 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:02:23.950044 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 19:02:23.948523 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 19:02:24.004214 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:02:24.625267 systemd-networkd[767]: eth0: Gained IPv6LL
Feb 13 19:02:25.090995 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:02:25.090995 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:02:25.095346 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:02:25.417537 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:02:25.479479 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:02:25.480978 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 19:02:25.719250 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:02:25.983546 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:02:25.983546 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:02:25.986249 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:26.015311 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:26.018806 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:26.020909 ignition[947]: INFO : files: files passed
Feb 13 19:02:26.020909 ignition[947]: INFO : Ignition finished successfully
Feb 13 19:02:26.021430 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:02:26.033275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:02:26.035628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:02:26.038436 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:02:26.038550 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:02:26.044152 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:02:26.047329 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:26.047329 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:26.050208 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:26.049758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:26.051525 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:02:26.069361 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:02:26.091933 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:02:26.092060 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:02:26.094310 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:26.095778 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:02:26.097333 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:02:26.098158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:02:26.114161 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:26.120253 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:02:26.128541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:26.129624 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:26.131257 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:02:26.133719 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:02:26.133853 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:26.136131 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:02:26.136970 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:02:26.138701 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:02:26.141110 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:26.142686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:26.144208 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:02:26.146150 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:02:26.147961 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:02:26.149490 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:02:26.150944 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:02:26.152124 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:02:26.152264 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:02:26.154042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:02:26.155532 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:02:26.156989 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:02:26.161147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:02:26.162139 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:02:26.162266 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:02:26.164739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:02:26.164869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:02:26.166391 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:02:26.167584 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:02:26.168421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:02:26.170103 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:02:26.171543 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:02:26.173165 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:02:26.173264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:02:26.174468 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:02:26.174554 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:02:26.175842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:02:26.175965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:02:26.177236 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:02:26.177349 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:02:26.188309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:02:26.189031 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:02:26.189193 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:02:26.191408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:02:26.192658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:02:26.192788 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:02:26.194137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:02:26.194251 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:02:26.199685 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:02:26.200699 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
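[Editor's note] Everything in this teardown block is ordinary journal data, so the same sequence can be pulled back out of the booted system for debugging. A minimal sketch, assuming the journal was flushed to persistent storage:

    # show PID 1's initrd teardown messages with microsecond timestamps
    journalctl -b -o short-precise _PID=1 | grep -E 'Stopped (target|dracut)'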
Feb 13 19:02:26.203134 ignition[1002]: INFO : Ignition 2.20.0 Feb 13 19:02:26.203134 ignition[1002]: INFO : Stage: umount Feb 13 19:02:26.205533 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:02:26.205533 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:02:26.205533 ignition[1002]: INFO : umount: umount passed Feb 13 19:02:26.205533 ignition[1002]: INFO : Ignition finished successfully Feb 13 19:02:26.206238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:02:26.206808 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:02:26.206896 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:02:26.208495 systemd[1]: Stopped target network.target - Network. Feb 13 19:02:26.209859 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:02:26.209939 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:02:26.211344 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:02:26.211386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:02:26.212970 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:02:26.213021 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:02:26.213879 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:02:26.213922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:02:26.215360 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:02:26.216797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:02:26.225856 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:02:26.226045 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:02:26.229695 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:02:26.229958 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:02:26.230053 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:02:26.232939 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:02:26.233707 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:02:26.233757 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:02:26.245200 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:02:26.245927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:02:26.245989 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:02:26.247625 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:02:26.247667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:26.250765 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:02:26.250813 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:02:26.251686 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:02:26.251727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:02:26.253794 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:02:26.256620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
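[Editor's note] The two "no configs"/"no config dir" lines above are Ignition's umount stage checking its base-config layers before the user config: fragments under /usr/lib/ignition/base.d are merged first, then platform-specific fragments for the detected platform (qemu here), and finally the user-supplied config. Both locations being empty is normal on this image; the layering, sketched:

    /usr/lib/ignition/base.d/                 # vendor defaults, merged first (empty here)
    /usr/lib/ignition/base.platform.d/qemu/   # per-platform fragments (absent here)
    <user config from the hypervisor>         # applied last

A vendor could, for example, ship a hypothetical /usr/lib/ignition/base.d/10-base.ign to inject defaults ahead of every user config.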
Feb 13 19:02:26.256684 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:02:26.262615 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:02:26.262727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:02:26.272818 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:02:26.272962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:02:26.274871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:02:26.274912 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:02:26.276242 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:02:26.276272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:02:26.277655 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:02:26.277703 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:02:26.279852 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:02:26.279900 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:02:26.282190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:02:26.282238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:02:26.299299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:02:26.300089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:02:26.300157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:02:26.301879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:02:26.301928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:02:26.304193 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:02:26.304251 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:02:26.304546 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:02:26.304635 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:02:26.305540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:02:26.305615 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:02:26.307599 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:02:26.308917 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:02:26.308995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:02:26.311518 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:02:26.322777 systemd[1]: Switching root. Feb 13 19:02:26.349252 systemd-journald[240]: Journal stopped Feb 13 19:02:27.210399 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). 
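[Editor's note] The "Switching root" line is the pivot from the initramfs into the real root filesystem: initrd-switch-root.service asks systemd to serialize its state, move /sysroot to /, and re-execute itself, which is also why journald logs receiving SIGTERM from PID 1 immediately afterwards. The service effectively runs the following (a sketch of the stock unit's ExecStart, not quoted from this log):

    # what initrd-switch-root.service runs, roughly:
    systemctl --no-block switch-root /sysroot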
Feb 13 19:02:27.210471 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:02:27.210485 kernel: SELinux: policy capability open_perms=1 Feb 13 19:02:27.210495 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:02:27.210504 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:02:27.210514 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:02:27.210523 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:02:27.210532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:02:27.210542 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:02:27.210552 kernel: audit: type=1403 audit(1739473346.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:02:27.210564 systemd[1]: Successfully loaded SELinux policy in 36.354ms. Feb 13 19:02:27.210587 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.114ms. Feb 13 19:02:27.210598 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:02:27.210609 systemd[1]: Detected virtualization kvm. Feb 13 19:02:27.210619 systemd[1]: Detected architecture arm64. Feb 13 19:02:27.210632 systemd[1]: Detected first boot. Feb 13 19:02:27.210642 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:02:27.210652 zram_generator::config[1049]: No configuration found. Feb 13 19:02:27.210664 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:02:27.210677 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:02:27.210689 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:02:27.210699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:02:27.210710 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:02:27.210720 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:27.210730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:02:27.210741 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:02:27.210751 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:02:27.210765 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:02:27.210776 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:02:27.210786 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:02:27.210797 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:02:27.210807 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:02:27.210817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:02:27.210828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:02:27.210838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:02:27.210851 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
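[Editor's note] The zram_generator "No configuration found" line means no /etc/systemd/zram-generator.conf (or matching drop-in) exists, so no compressed RAM devices are created. If one were wanted, a minimal config would look like this; the file and values are an assumption for illustration, nothing in the log implies them:

    # /etc/systemd/zram-generator.conf  (hypothetical)
    [zram0]
    zram-size = min(ram / 2, 4096)      # device size in MiB, capped at 4 GiB
    compression-algorithm = zstd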
Feb 13 19:02:27.210863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:02:27.210874 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:02:27.210884 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:02:27.210894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:02:27.210905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:02:27.210916 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:02:27.210926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:02:27.210938 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:02:27.210949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:02:27.210959 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:02:27.210969 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:02:27.210979 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:02:27.210990 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:02:27.211000 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:02:27.211010 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:02:27.211020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:02:27.211032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:02:27.211042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:02:27.211053 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:02:27.211063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:02:27.211073 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:02:27.211093 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:02:27.211105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:02:27.211116 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:02:27.211126 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:02:27.211139 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:02:27.211150 systemd[1]: Reached target machines.target - Containers. Feb 13 19:02:27.211160 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:02:27.211170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:27.211181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:02:27.211191 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:02:27.211202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:27.211212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:27.211224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
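[Editor's note] The cluster of modprobe@*.service jobs starting here all instantiate one template unit: systemd expands the instance name after "@" and passes it to modprobe, so modprobe@dm_mod.service just runs "modprobe -abq dm_mod" once and remembers the result. The shipped template is roughly:

    # modprobe@.service (abridged sketch of the stock template)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/modprobe -abq %i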
Feb 13 19:02:27.211234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:02:27.211244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:27.211255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:02:27.211265 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:02:27.211275 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:02:27.211285 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:02:27.211296 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:02:27.211306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:27.211320 kernel: fuse: init (API version 7.39) Feb 13 19:02:27.211329 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:02:27.211340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:02:27.211350 kernel: loop: module loaded Feb 13 19:02:27.211359 kernel: ACPI: bus type drm_connector registered Feb 13 19:02:27.211369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:02:27.211379 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:02:27.211389 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:02:27.211399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:02:27.211411 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:02:27.211422 systemd[1]: Stopped verity-setup.service. Feb 13 19:02:27.211432 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:02:27.211442 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:02:27.211483 systemd-journald[1124]: Collecting audit messages is disabled. Feb 13 19:02:27.211509 systemd-journald[1124]: Journal started Feb 13 19:02:27.211531 systemd-journald[1124]: Runtime Journal (/run/log/journal/3a010ff1acf944f2be1345bbcc8c8895) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:02:26.995442 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:02:27.005987 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:02:27.006376 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:02:27.215259 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:02:27.215966 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:02:27.216913 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:02:27.217927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:02:27.219669 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:02:27.220779 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:02:27.222032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:02:27.223367 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:02:27.223555 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
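[Editor's note] The "Runtime Journal ... is 5.9M, max 47.3M" figures above come from journald's defaults, which scale with the size of /run; they can be pinned explicitly instead. An illustrative drop-in, not present on this system:

    # /etc/systemd/journald.conf.d/size.conf  (hypothetical)
    [Journal]
    RuntimeMaxUse=48M      # cap for the volatile journal in /run/log/journal
    SystemMaxUse=196M      # cap for the persistent journal in /var/log/journal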
Feb 13 19:02:27.224721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:27.224914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:27.226303 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:02:27.226495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:27.227772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:27.229119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:27.230322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:02:27.230486 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:02:27.231828 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:27.232004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:27.233263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:02:27.234494 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:02:27.237472 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:02:27.238794 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:02:27.251800 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:02:27.265259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:02:27.267206 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:02:27.268021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:02:27.268062 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:02:27.269961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:02:27.272030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:02:27.273983 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:02:27.274903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:27.276013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:02:27.277855 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:02:27.278789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:27.280249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:02:27.281224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:27.283287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:27.295303 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:02:27.297640 systemd-journald[1124]: Time spent on flushing to /var/log/journal/3a010ff1acf944f2be1345bbcc8c8895 is 16.724ms for 874 entries. Feb 13 19:02:27.297640 systemd-journald[1124]: System Journal (/var/log/journal/3a010ff1acf944f2be1345bbcc8c8895) is 8M, max 195.6M, 187.6M free. 
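[Editor's note] systemd-journal-flush.service, starting above, is what moves logging from the volatile /run journal into /var/log/journal once the root filesystem is writable; the timing line shows that flush costing about 17 ms for 874 entries. The same handoff can be requested by hand:

    # flush /run/log/journal into /var/log/journal (what the flush unit triggers)
    journalctl --flush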
Feb 13 19:02:27.322900 systemd-journald[1124]: Received client request to flush runtime journal. Feb 13 19:02:27.322946 kernel: loop0: detected capacity change from 0 to 201592 Feb 13 19:02:27.322959 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:02:27.298946 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:02:27.301819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:02:27.303268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:02:27.311938 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:02:27.316184 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:02:27.317546 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:02:27.319571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:27.324304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:02:27.327971 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:02:27.337303 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:02:27.343330 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:02:27.351602 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:02:27.357141 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 19:02:27.362355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:02:27.363971 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:02:27.366295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:02:27.367227 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:02:27.386854 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Feb 13 19:02:27.386870 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Feb 13 19:02:27.391438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:02:27.413111 kernel: loop2: detected capacity change from 0 to 123192 Feb 13 19:02:27.461110 kernel: loop3: detected capacity change from 0 to 201592 Feb 13 19:02:27.474349 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 19:02:27.479736 kernel: loop5: detected capacity change from 0 to 123192 Feb 13 19:02:27.482334 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:02:27.482751 (sd-merge)[1191]: Merged extensions into '/usr'. Feb 13 19:02:27.485932 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:02:27.485951 systemd[1]: Reloading... Feb 13 19:02:27.548815 zram_generator::config[1218]: No configuration found. Feb 13 19:02:27.576359 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:02:27.644903 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:27.693747 systemd[1]: Reloading finished in 207 ms. 
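[Editor's note] The (sd-merge) lines are systemd-sysext overlaying the three extension images (the kubernetes one was downloaded by Ignition earlier) onto /usr; the loopN capacity-change messages are those images being attached as loop devices. After boot the merge can be inspected and redone:

    systemd-sysext status      # which hierarchies are merged, and from which images
    systemd-sysext refresh     # re-merge after adding/removing images in /etc/extensions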
Feb 13 19:02:27.711814 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:02:27.713273 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:02:27.731373 systemd[1]: Starting ensure-sysext.service... Feb 13 19:02:27.733205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:02:27.746017 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:02:27.746032 systemd[1]: Reloading... Feb 13 19:02:27.749924 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:02:27.750511 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:02:27.751302 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:02:27.751656 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Feb 13 19:02:27.751783 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Feb 13 19:02:27.754500 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:02:27.754615 systemd-tmpfiles[1255]: Skipping /boot Feb 13 19:02:27.763638 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:02:27.763757 systemd-tmpfiles[1255]: Skipping /boot Feb 13 19:02:27.794108 zram_generator::config[1284]: No configuration found. Feb 13 19:02:27.871213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:27.921145 systemd[1]: Reloading finished in 174 ms. Feb 13 19:02:27.936734 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:02:27.953153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:02:27.961066 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:27.963734 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:02:27.966207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:02:27.971438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:02:27.978406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:02:27.983026 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:02:27.990054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:27.992383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:27.994483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:27.997470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:27.998571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
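[Editor's note] The "Duplicate line for path" warnings above are harmless: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the first definition it parses. tmpfiles.d lines have the shape "Type Path Mode User Group Age Argument", so a second fragment repeating, say, the /root entry would reproduce the warning:

    # format: Type Path Mode User Group Age Argument
    d /root 0700 root root -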
Feb 13 19:02:27.998686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:28.002553 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:02:28.006156 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:02:28.008930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:28.010422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:28.012203 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:28.012357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:28.015028 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Feb 13 19:02:28.015913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:28.016068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:28.021994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:28.028463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:28.032404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:28.036464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:28.037356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:28.037493 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:28.041333 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:02:28.045121 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:02:28.046734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:02:28.048333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:28.048544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:28.049896 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:02:28.051922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:28.052073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:28.053268 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:02:28.054613 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:28.054758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:28.058239 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:02:28.072985 augenrules[1386]: No rules Feb 13 19:02:28.080354 systemd[1]: Finished ensure-sysext.service. Feb 13 19:02:28.081387 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:28.081573 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
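[Editor's note] augenrules reporting "No rules" means /etc/audit/rules.d/ contributed nothing when it assembled the kernel audit ruleset, so audit-rules.service loaded an empty set. For comparison, a single watch rule dropped into that directory would look like this; the file and rule are purely illustrative, no such rule exists on this host:

    # /etc/audit/rules.d/10-update-conf.rules  (hypothetical)
    -w /etc/flatcar/update.conf -p wa -k flatcar-update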
Feb 13 19:02:28.106116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1357) Feb 13 19:02:28.122430 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:02:28.131915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:28.138261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:28.139772 systemd-resolved[1324]: Positive Trust Anchors: Feb 13 19:02:28.141602 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:02:28.141638 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:02:28.141996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:28.144183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:28.147248 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:28.148118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:28.148161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:28.149617 systemd-resolved[1324]: Defaulting to hostname 'linux'. Feb 13 19:02:28.150861 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:02:28.153373 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:02:28.154215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:02:28.154595 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:02:28.157051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:28.157274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:28.158348 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:02:28.158525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:28.159563 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:28.159729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:28.162803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:28.162958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:28.170719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:02:28.172520 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
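[Editor's note] The "Positive Trust Anchors" entry is the root zone's published DNSSEC DS record (key tag 20326), which systemd-resolved compiles in; the long negative list is private and reserved namespaces it will not attempt to validate. Additional anchors can be supplied as *.positive files, e.g. (hypothetical file, record copied from the log):

    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d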
Feb 13 19:02:28.176755 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:02:28.179086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:28.179161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:28.196128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:02:28.226328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:02:28.229912 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:02:28.231038 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:02:28.231815 systemd-networkd[1400]: lo: Link UP Feb 13 19:02:28.231828 systemd-networkd[1400]: lo: Gained carrier Feb 13 19:02:28.232804 systemd-networkd[1400]: Enumeration completed Feb 13 19:02:28.233492 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:02:28.234641 systemd[1]: Reached target network.target - Network. Feb 13 19:02:28.235180 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:02:28.235188 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:02:28.235635 systemd-networkd[1400]: eth0: Link UP Feb 13 19:02:28.235641 systemd-networkd[1400]: eth0: Gained carrier Feb 13 19:02:28.235652 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:02:28.236750 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:02:28.238557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:02:28.239786 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:02:28.244829 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:02:28.249153 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:02:28.250257 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Feb 13 19:02:28.251269 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:02:28.251421 systemd-timesyncd[1401]: Initial clock synchronization to Thu 2025-02-13 19:02:28.036248 UTC. Feb 13 19:02:28.255017 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:02:28.260827 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:28.271367 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:02:28.293599 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:02:28.294781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:02:28.297170 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:02:28.298042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
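[Editor's note] eth0 was matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped with Flatcar, which is why networkd warns about the "potentially unpredictable interface name"; DHCP then handed out 10.0.0.42/16 and pointed timesyncd at 10.0.0.1. Stripped to its effect here, such a catch-all looks like the following (a trimmed sketch, not the verbatim shipped file):

    [Match]
    Name=*

    [Network]
    DHCP=yes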
Feb 13 19:02:28.298999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:02:28.300124 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:02:28.300982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:02:28.302106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:02:28.302962 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:02:28.302994 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:02:28.303666 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:02:28.305053 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:02:28.307364 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:02:28.310549 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:02:28.311639 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:02:28.313671 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:02:28.316742 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:02:28.318245 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:02:28.320193 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:02:28.321580 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:02:28.322441 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:02:28.323141 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:02:28.323832 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:28.323861 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:28.324802 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:02:28.326646 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:02:28.329257 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:28.330223 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:02:28.335356 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:02:28.336150 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:02:28.338041 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:02:28.342112 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:02:28.344111 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:02:28.347103 jq[1435]: false Feb 13 19:02:28.346716 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:02:28.350711 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:02:28.352742 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
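[Editor's note] docker.socket being listened on before any container daemon runs is plain systemd socket activation: systemd owns /run/docker.sock and starts the service on the first client connection. The earlier warning about ListenStream referencing /var/run came from exactly this unit; its socket stanza is approximately (a sketch; the shipped unit used /var/run/docker.sock, which systemd rewrote to /run/docker.sock):

    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker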
Feb 13 19:02:28.353307 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:02:28.359424 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:02:28.361433 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:02:28.362822 dbus-daemon[1434]: [system] SELinux support is enabled Feb 13 19:02:28.371916 extend-filesystems[1436]: Found loop3 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found loop4 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found loop5 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda1 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda2 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda3 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found usr Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda4 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda6 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda7 Feb 13 19:02:28.371916 extend-filesystems[1436]: Found vda9 Feb 13 19:02:28.371916 extend-filesystems[1436]: Checking size of /dev/vda9 Feb 13 19:02:28.363137 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:02:28.367126 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:02:28.400713 jq[1450]: true Feb 13 19:02:28.369608 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:02:28.371179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:02:28.402959 tar[1454]: linux-arm64/LICENSE Feb 13 19:02:28.402959 tar[1454]: linux-arm64/helm Feb 13 19:02:28.373036 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:02:28.373276 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:02:28.378487 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:02:28.379221 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:02:28.384852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:02:28.384897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:02:28.387311 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:02:28.387333 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
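[Editor's note] extend-filesystems walks the block devices it just enumerated and then grows the root filesystem in place; the next lines show resize2fs 1.47.1 taking /dev/vda9 from 553472 to 1864699 4k blocks while mounted. Done by hand, the online grow is a single command (ext4 supports resizing while mounted; device name taken from the log):

    # grow the mounted ext4 root to fill its (already enlarged) partition
    resize2fs /dev/vda9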
Feb 13 19:02:28.394903 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:02:28.409722 extend-filesystems[1436]: Resized partition /dev/vda9 Feb 13 19:02:28.411357 jq[1465]: true Feb 13 19:02:28.415111 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:02:28.423578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1356) Feb 13 19:02:28.423634 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:02:28.425541 update_engine[1445]: I20250213 19:02:28.425328 1445 main.cc:92] Flatcar Update Engine starting Feb 13 19:02:28.430391 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:02:28.434614 update_engine[1445]: I20250213 19:02:28.433363 1445 update_check_scheduler.cc:74] Next update check in 11m49s Feb 13 19:02:28.436939 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:02:28.437282 systemd-logind[1444]: New seat seat0. Feb 13 19:02:28.438286 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:02:28.439810 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:02:28.448104 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:02:28.466425 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:02:28.466425 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:02:28.466425 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:02:28.473683 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Feb 13 19:02:28.468415 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:02:28.469494 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:02:28.520511 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:02:28.523184 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:28.524872 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:02:28.526683 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:02:28.660786 containerd[1457]: time="2025-02-13T19:02:28.660632720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:02:28.696905 containerd[1457]: time="2025-02-13T19:02:28.696840520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698507 containerd[1457]: time="2025-02-13T19:02:28.698433600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698507 containerd[1457]: time="2025-02-13T19:02:28.698481640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:02:28.698507 containerd[1457]: time="2025-02-13T19:02:28.698499760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 19:02:28.698708 containerd[1457]: time="2025-02-13T19:02:28.698679080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:02:28.698708 containerd[1457]: time="2025-02-13T19:02:28.698702880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698772 containerd[1457]: time="2025-02-13T19:02:28.698759680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698791 containerd[1457]: time="2025-02-13T19:02:28.698772440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698999 containerd[1457]: time="2025-02-13T19:02:28.698968240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:28.698999 containerd[1457]: time="2025-02-13T19:02:28.698991240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699046 containerd[1457]: time="2025-02-13T19:02:28.699004520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699046 containerd[1457]: time="2025-02-13T19:02:28.699014280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699208 containerd[1457]: time="2025-02-13T19:02:28.699100800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699332 containerd[1457]: time="2025-02-13T19:02:28.699309320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699466 containerd[1457]: time="2025-02-13T19:02:28.699438520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:28.699534 containerd[1457]: time="2025-02-13T19:02:28.699451680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:02:28.699607 containerd[1457]: time="2025-02-13T19:02:28.699591400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:02:28.699655 containerd[1457]: time="2025-02-13T19:02:28.699644360Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:02:28.714159 containerd[1457]: time="2025-02-13T19:02:28.714094160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:02:28.714159 containerd[1457]: time="2025-02-13T19:02:28.714166880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:02:28.714313 containerd[1457]: time="2025-02-13T19:02:28.714184320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 19:02:28.714313 containerd[1457]: time="2025-02-13T19:02:28.714200240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:02:28.714313 containerd[1457]: time="2025-02-13T19:02:28.714215480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:02:28.714479 containerd[1457]: time="2025-02-13T19:02:28.714392360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:02:28.715701 containerd[1457]: time="2025-02-13T19:02:28.715639600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:02:28.715841 containerd[1457]: time="2025-02-13T19:02:28.715818440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:02:28.715872 containerd[1457]: time="2025-02-13T19:02:28.715847960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:02:28.715941 containerd[1457]: time="2025-02-13T19:02:28.715869600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:02:28.715941 containerd[1457]: time="2025-02-13T19:02:28.715890360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.715941 containerd[1457]: time="2025-02-13T19:02:28.715907880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.715941 containerd[1457]: time="2025-02-13T19:02:28.715924200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716015 containerd[1457]: time="2025-02-13T19:02:28.715942480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716015 containerd[1457]: time="2025-02-13T19:02:28.715961840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716015 containerd[1457]: time="2025-02-13T19:02:28.715980240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716015 containerd[1457]: time="2025-02-13T19:02:28.715993560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716015 containerd[1457]: time="2025-02-13T19:02:28.716009600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716040120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716054840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716070680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716104640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716122360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716139480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716165 containerd[1457]: time="2025-02-13T19:02:28.716155440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716172760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716190120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716210080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716228680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716244320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716257760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716283600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716312320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716331240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716347 containerd[1457]: time="2025-02-13T19:02:28.716346240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:02:28.716572 containerd[1457]: time="2025-02-13T19:02:28.716543120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:02:28.716572 containerd[1457]: time="2025-02-13T19:02:28.716569200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:02:28.716572 containerd[1457]: time="2025-02-13T19:02:28.716583400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:02:28.716572 containerd[1457]: time="2025-02-13T19:02:28.716599280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:02:28.716572 containerd[1457]: time="2025-02-13T19:02:28.716612160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.716846 containerd[1457]: time="2025-02-13T19:02:28.716628640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 19:02:28.716846 containerd[1457]: time="2025-02-13T19:02:28.716640600Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:02:28.716846 containerd[1457]: time="2025-02-13T19:02:28.716658040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:02:28.717076 containerd[1457]: time="2025-02-13T19:02:28.717027560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:02:28.717201 containerd[1457]: time="2025-02-13T19:02:28.717098720Z" level=info msg="Connect containerd service" Feb 13 19:02:28.717201 containerd[1457]: time="2025-02-13T19:02:28.717137280Z" level=info msg="using legacy CRI server" Feb 13 19:02:28.717201 containerd[1457]: time="2025-02-13T19:02:28.717145120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:02:28.717413 containerd[1457]: time="2025-02-13T19:02:28.717391040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:02:28.718791 
containerd[1457]: time="2025-02-13T19:02:28.718756480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:02:28.719008 containerd[1457]: time="2025-02-13T19:02:28.718978280Z" level=info msg="Start subscribing containerd event" Feb 13 19:02:28.719038 containerd[1457]: time="2025-02-13T19:02:28.719026120Z" level=info msg="Start recovering state" Feb 13 19:02:28.719139 containerd[1457]: time="2025-02-13T19:02:28.719107360Z" level=info msg="Start event monitor" Feb 13 19:02:28.719139 containerd[1457]: time="2025-02-13T19:02:28.719125520Z" level=info msg="Start snapshots syncer" Feb 13 19:02:28.719139 containerd[1457]: time="2025-02-13T19:02:28.719134640Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:02:28.719200 containerd[1457]: time="2025-02-13T19:02:28.719141400Z" level=info msg="Start streaming server" Feb 13 19:02:28.719749 containerd[1457]: time="2025-02-13T19:02:28.719721520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:02:28.719794 containerd[1457]: time="2025-02-13T19:02:28.719765480Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:02:28.722251 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:02:28.723191 containerd[1457]: time="2025-02-13T19:02:28.723131480Z" level=info msg="containerd successfully booted in 0.063642s" Feb 13 19:02:28.837686 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:02:28.852384 tar[1454]: linux-arm64/README.md Feb 13 19:02:28.864993 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:02:28.867468 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:02:28.871059 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:02:28.881000 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:02:28.881301 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:02:28.884571 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:02:28.896879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:02:28.900317 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:02:28.903175 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:02:28.904481 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:02:29.873221 systemd-networkd[1400]: eth0: Gained IPv6LL Feb 13 19:02:29.874909 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:02:29.879202 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:02:29.893405 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:02:29.895679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:29.897749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:02:29.913177 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:02:29.913430 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:02:29.916196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
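[Editor's note] The "failed to load cni during init" error above is expected on a first boot: containerd's CRI plugin found no network config in /etc/cni/net.d (the NetworkPluginConfDir shown in the config dump above), and pod networking stays unconfigured until something installs one. Purely as a hedged illustration, assuming a plain bridge network (the file name, network name, bridge name, and subnet below are hypothetical, and on a real cluster a CNI add-on would normally install this instead):

```python
import json
from pathlib import Path

# Hedged sketch: write a minimal CNI bridge conflist so the CRI plugin's
# loader (NetworkPluginConfDir=/etc/cni/net.d in the dump above) finds at
# least one network. All names and the subnet here are hypothetical.
conf = {
    "cniVersion": "1.0.0",
    "name": "example-bridge",              # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",              # hypothetical bridge device
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24", # hypothetical pod subnet
            },
        }
    ],
}

conf_dir = Path("/etc/cni/net.d")          # path taken from the log above
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example-bridge.conflist").write_text(json.dumps(conf, indent=2))
```

Once any valid conflist exists there, the CRI plugin's cni conf syncer (started below) picks it up without a containerd restart.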
Feb 13 19:02:29.921274 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:02:30.429540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:30.431319 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:02:30.436843 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:30.441325 systemd[1]: Startup finished in 549ms (kernel) + 5.817s (initrd) + 3.959s (userspace) = 10.327s. Feb 13 19:02:30.856142 kubelet[1546]: E0213 19:02:30.855997 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:30.858766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:30.858913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:30.861170 systemd[1]: kubelet.service: Consumed 804ms CPU time, 250.5M memory peak. Feb 13 19:02:33.064309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:02:33.065773 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:59774.service - OpenSSH per-connection server daemon (10.0.0.1:59774). Feb 13 19:02:33.137195 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 59774 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:33.139148 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.145372 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:02:33.153386 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:02:33.158749 systemd-logind[1444]: New session 1 of user core. Feb 13 19:02:33.162913 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:02:33.166202 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:02:33.173104 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:02:33.175404 systemd-logind[1444]: New session c1 of user core. Feb 13 19:02:33.280794 systemd[1564]: Queued start job for default target default.target. Feb 13 19:02:33.299166 systemd[1564]: Created slice app.slice - User Application Slice. Feb 13 19:02:33.299201 systemd[1564]: Reached target paths.target - Paths. Feb 13 19:02:33.299241 systemd[1564]: Reached target timers.target - Timers. Feb 13 19:02:33.300656 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:02:33.314195 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:02:33.314319 systemd[1564]: Reached target sockets.target - Sockets. Feb 13 19:02:33.314362 systemd[1564]: Reached target basic.target - Basic System. Feb 13 19:02:33.314392 systemd[1564]: Reached target default.target - Main User Target. Feb 13 19:02:33.314418 systemd[1564]: Startup finished in 132ms. Feb 13 19:02:33.314667 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:02:33.317657 systemd[1]: Started session-1.scope - Session 1 of User core. 
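[Editor's note] The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-join state: that file is generated by kubeadm during init/join, so the unit crash-loops under its restart counter (visible later in this log) until the node is bootstrapped. As a hedged sketch only of what eventually lands there, with illustrative field values rather than anything kubeadm generated on this host (cgroupDriver: systemd is consistent with the SystemdCgroup:true runc option in the containerd dump above):

```python
from pathlib import Path

# Hedged sketch: kubeadm normally generates this file during init/join.
# Every value below is an illustrative default, not taken from this host.
kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # consistent with runc SystemdCgroup:true above
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                   # hypothetical cluster DNS service IP
"""

path = Path("/var/lib/kubelet/config.yaml")   # path taken from the error above
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(kubelet_config)
```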
Feb 13 19:02:33.384752 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:59782.service - OpenSSH per-connection server daemon (10.0.0.1:59782). Feb 13 19:02:33.433122 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 59782 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:33.434468 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.439602 systemd-logind[1444]: New session 2 of user core. Feb 13 19:02:33.448310 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:02:33.502135 sshd[1577]: Connection closed by 10.0.0.1 port 59782 Feb 13 19:02:33.501252 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:33.518572 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:59782.service: Deactivated successfully. Feb 13 19:02:33.522553 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:02:33.526170 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:02:33.540467 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:59798.service - OpenSSH per-connection server daemon (10.0.0.1:59798). Feb 13 19:02:33.545076 systemd-logind[1444]: Removed session 2. Feb 13 19:02:33.596615 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 59798 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:33.597878 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.602621 systemd-logind[1444]: New session 3 of user core. Feb 13 19:02:33.613272 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:02:33.662617 sshd[1585]: Connection closed by 10.0.0.1 port 59798 Feb 13 19:02:33.663244 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:33.676301 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:59798.service: Deactivated successfully. Feb 13 19:02:33.677833 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:02:33.678512 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:02:33.692436 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806). Feb 13 19:02:33.693732 systemd-logind[1444]: Removed session 3. Feb 13 19:02:33.730423 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:33.732019 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.737100 systemd-logind[1444]: New session 4 of user core. Feb 13 19:02:33.751264 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:02:33.803698 sshd[1593]: Connection closed by 10.0.0.1 port 59806 Feb 13 19:02:33.804179 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:33.821620 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:59806.service: Deactivated successfully. Feb 13 19:02:33.823595 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:02:33.825772 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:02:33.835511 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:59818.service - OpenSSH per-connection server daemon (10.0.0.1:59818). Feb 13 19:02:33.836847 systemd-logind[1444]: Removed session 4. 
Feb 13 19:02:33.873382 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:33.875122 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.879520 systemd-logind[1444]: New session 5 of user core. Feb 13 19:02:33.889265 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:02:33.951486 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:02:33.951767 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:33.973385 sudo[1602]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:33.975105 sshd[1601]: Connection closed by 10.0.0.1 port 59818 Feb 13 19:02:33.975457 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:33.989421 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:59818.service: Deactivated successfully. Feb 13 19:02:33.991150 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:02:33.992915 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:02:34.015456 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:59828.service - OpenSSH per-connection server daemon (10.0.0.1:59828). Feb 13 19:02:34.016380 systemd-logind[1444]: Removed session 5. Feb 13 19:02:34.051949 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 59828 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:34.053303 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:34.057154 systemd-logind[1444]: New session 6 of user core. Feb 13 19:02:34.064266 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:02:34.116343 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:02:34.116642 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:34.119931 sudo[1612]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:34.124977 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:02:34.125299 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:34.142486 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:34.166910 augenrules[1634]: No rules Feb 13 19:02:34.168364 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:34.168608 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:34.169848 sudo[1611]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:34.171193 sshd[1610]: Connection closed by 10.0.0.1 port 59828 Feb 13 19:02:34.171664 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:34.184327 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:59828.service: Deactivated successfully. Feb 13 19:02:34.186639 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:02:34.187977 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:02:34.198395 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:59840.service - OpenSSH per-connection server daemon (10.0.0.1:59840). Feb 13 19:02:34.199458 systemd-logind[1444]: Removed session 6. 
Feb 13 19:02:34.242252 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 59840 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:02:34.243515 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:34.248288 systemd-logind[1444]: New session 7 of user core. Feb 13 19:02:34.258257 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:02:34.310607 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:02:34.310907 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:34.666463 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:02:34.666564 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:02:34.926584 dockerd[1665]: time="2025-02-13T19:02:34.926430315Z" level=info msg="Starting up" Feb 13 19:02:35.300632 dockerd[1665]: time="2025-02-13T19:02:35.300521290Z" level=info msg="Loading containers: start." Feb 13 19:02:35.452234 kernel: Initializing XFRM netlink socket Feb 13 19:02:35.518979 systemd-networkd[1400]: docker0: Link UP Feb 13 19:02:35.556771 dockerd[1665]: time="2025-02-13T19:02:35.556473925Z" level=info msg="Loading containers: done." Feb 13 19:02:35.575214 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2428045744-merged.mount: Deactivated successfully. Feb 13 19:02:35.578556 dockerd[1665]: time="2025-02-13T19:02:35.578494231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:02:35.578688 dockerd[1665]: time="2025-02-13T19:02:35.578605869Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:02:35.578821 dockerd[1665]: time="2025-02-13T19:02:35.578782315Z" level=info msg="Daemon has completed initialization" Feb 13 19:02:35.621988 dockerd[1665]: time="2025-02-13T19:02:35.621861542Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:02:35.622088 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:02:36.179543 containerd[1457]: time="2025-02-13T19:02:36.179505999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:02:36.802245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674113762.mount: Deactivated successfully. 
Feb 13 19:02:38.216143 containerd[1457]: time="2025-02-13T19:02:38.215776226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:38.229539 containerd[1457]: time="2025-02-13T19:02:38.229467603Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 19:02:38.302436 containerd[1457]: time="2025-02-13T19:02:38.302398608Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:38.306609 containerd[1457]: time="2025-02-13T19:02:38.306564996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:38.308214 containerd[1457]: time="2025-02-13T19:02:38.308157308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.128220478s" Feb 13 19:02:38.308264 containerd[1457]: time="2025-02-13T19:02:38.308224362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:02:38.309611 containerd[1457]: time="2025-02-13T19:02:38.309574487Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:02:39.663231 containerd[1457]: time="2025-02-13T19:02:39.663157746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.663975 containerd[1457]: time="2025-02-13T19:02:39.663937515Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 19:02:39.664671 containerd[1457]: time="2025-02-13T19:02:39.664638525Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.667745 containerd[1457]: time="2025-02-13T19:02:39.667695324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.669222 containerd[1457]: time="2025-02-13T19:02:39.669184562Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.359572255s" Feb 13 19:02:39.669222 containerd[1457]: time="2025-02-13T19:02:39.669221460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 19:02:39.670106 
containerd[1457]: time="2025-02-13T19:02:39.670084595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:02:41.109482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:41.118306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:41.218767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:41.223150 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:41.264453 kubelet[1933]: E0213 19:02:41.264392 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:41.267699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:41.267850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:41.268320 systemd[1]: kubelet.service: Consumed 137ms CPU time, 102.5M memory peak. Feb 13 19:02:41.321775 containerd[1457]: time="2025-02-13T19:02:41.321722775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:41.322251 containerd[1457]: time="2025-02-13T19:02:41.322206941Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 19:02:41.325520 containerd[1457]: time="2025-02-13T19:02:41.325458886Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:41.328453 containerd[1457]: time="2025-02-13T19:02:41.328412654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:41.329809 containerd[1457]: time="2025-02-13T19:02:41.329770227Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.659656862s" Feb 13 19:02:41.329875 containerd[1457]: time="2025-02-13T19:02:41.329810727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:02:41.330441 containerd[1457]: time="2025-02-13T19:02:41.330412373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:02:42.447008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285485417.mount: Deactivated successfully. 
Feb 13 19:02:42.798964 containerd[1457]: time="2025-02-13T19:02:42.798782348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.799795 containerd[1457]: time="2025-02-13T19:02:42.799722117Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 19:02:42.803095 containerd[1457]: time="2025-02-13T19:02:42.800371865Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.805352 containerd[1457]: time="2025-02-13T19:02:42.805306547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.806218 containerd[1457]: time="2025-02-13T19:02:42.806179673Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.475729102s" Feb 13 19:02:42.806260 containerd[1457]: time="2025-02-13T19:02:42.806219443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:02:42.807022 containerd[1457]: time="2025-02-13T19:02:42.806983846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:02:43.537742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682046268.mount: Deactivated successfully. 
Feb 13 19:02:44.374829 containerd[1457]: time="2025-02-13T19:02:44.374765627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.375376 containerd[1457]: time="2025-02-13T19:02:44.375317140Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 19:02:44.376237 containerd[1457]: time="2025-02-13T19:02:44.376202678Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.379194 containerd[1457]: time="2025-02-13T19:02:44.379161392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.380452 containerd[1457]: time="2025-02-13T19:02:44.380421327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.573401205s" Feb 13 19:02:44.380491 containerd[1457]: time="2025-02-13T19:02:44.380464171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:02:44.380921 containerd[1457]: time="2025-02-13T19:02:44.380888069Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:02:44.847840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667829566.mount: Deactivated successfully. 
Feb 13 19:02:44.851815 containerd[1457]: time="2025-02-13T19:02:44.851769998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.852533 containerd[1457]: time="2025-02-13T19:02:44.852489420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:02:44.853110 containerd[1457]: time="2025-02-13T19:02:44.853072778Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.855565 containerd[1457]: time="2025-02-13T19:02:44.855529519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.856633 containerd[1457]: time="2025-02-13T19:02:44.856599625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.673535ms" Feb 13 19:02:44.856675 containerd[1457]: time="2025-02-13T19:02:44.856632346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:02:44.857453 containerd[1457]: time="2025-02-13T19:02:44.857414739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:02:45.396150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778294536.mount: Deactivated successfully. Feb 13 19:02:47.607745 containerd[1457]: time="2025-02-13T19:02:47.607680109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.608293 containerd[1457]: time="2025-02-13T19:02:47.608234958Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 19:02:47.609176 containerd[1457]: time="2025-02-13T19:02:47.609139955Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.613057 containerd[1457]: time="2025-02-13T19:02:47.612994889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.614958 containerd[1457]: time="2025-02-13T19:02:47.614912022Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.757461802s" Feb 13 19:02:47.614958 containerd[1457]: time="2025-02-13T19:02:47.614955835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:02:51.503640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:02:51.518298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:51.619325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:51.623186 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:51.658115 kubelet[2091]: E0213 19:02:51.657608 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:51.659894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:51.660042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:51.660357 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.5M memory peak. Feb 13 19:02:52.343950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:52.344105 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.5M memory peak. Feb 13 19:02:52.354348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:52.377346 systemd[1]: Reload requested from client PID 2106 ('systemctl') (unit session-7.scope)... Feb 13 19:02:52.377365 systemd[1]: Reloading... Feb 13 19:02:52.453115 zram_generator::config[2156]: No configuration found. Feb 13 19:02:52.724751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:52.798070 systemd[1]: Reloading finished in 420 ms. Feb 13 19:02:52.838473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:52.840361 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:52.840573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:52.840621 systemd[1]: kubelet.service: Consumed 85ms CPU time, 90.2M memory peak. Feb 13 19:02:52.844215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:52.941678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:52.946379 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:52.982564 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:52.982564 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:52.982564 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:02:52.982564 kubelet[2197]: I0213 19:02:52.982547 2197 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:53.629146 kubelet[2197]: I0213 19:02:53.628695 2197 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:02:53.629146 kubelet[2197]: I0213 19:02:53.628732 2197 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:53.629425 kubelet[2197]: I0213 19:02:53.629407 2197 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:02:53.665617 kubelet[2197]: E0213 19:02:53.665577 2197 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:53.669075 kubelet[2197]: I0213 19:02:53.669017 2197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:53.679541 kubelet[2197]: E0213 19:02:53.679479 2197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:02:53.679541 kubelet[2197]: I0213 19:02:53.679537 2197 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:02:53.682862 kubelet[2197]: I0213 19:02:53.682831 2197 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:02:53.683117 kubelet[2197]: I0213 19:02:53.683067 2197 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:53.683293 kubelet[2197]: I0213 19:02:53.683116 2197 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:02:53.683401 kubelet[2197]: I0213 19:02:53.683364 2197 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:02:53.683401 kubelet[2197]: I0213 19:02:53.683374 2197 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:02:53.683593 kubelet[2197]: I0213 19:02:53.683568 2197 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:53.689680 kubelet[2197]: I0213 19:02:53.689652 2197 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:02:53.689680 kubelet[2197]: I0213 19:02:53.689680 2197 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:53.689787 kubelet[2197]: I0213 19:02:53.689701 2197 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:02:53.689787 kubelet[2197]: I0213 19:02:53.689712 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:53.693399 kubelet[2197]: W0213 19:02:53.693276 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Feb 13 19:02:53.693399 kubelet[2197]: W0213 19:02:53.693293 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Feb 13 19:02:53.693399 kubelet[2197]: E0213 19:02:53.693349 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:53.693399 kubelet[2197]: E0213 19:02:53.693350 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:53.694891 kubelet[2197]: I0213 19:02:53.693865 2197 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:53.694891 kubelet[2197]: I0213 19:02:53.694570 2197 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:53.694891 kubelet[2197]: W0213 19:02:53.694698 2197 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:02:53.695922 kubelet[2197]: I0213 19:02:53.695901 2197 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:02:53.696033 kubelet[2197]: I0213 19:02:53.696021 2197 server.go:1287] "Started kubelet" Feb 13 19:02:53.696350 kubelet[2197]: I0213 19:02:53.696299 2197 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:53.697240 kubelet[2197]: I0213 19:02:53.697189 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:53.697635 kubelet[2197]: I0213 19:02:53.697614 2197 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:53.697745 kubelet[2197]: I0213 19:02:53.697243 2197 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:02:53.698679 kubelet[2197]: I0213 19:02:53.698645 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:53.700436 kubelet[2197]: I0213 19:02:53.700342 2197 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:02:53.700436 kubelet[2197]: I0213 19:02:53.700421 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:02:53.701490 kubelet[2197]: I0213 19:02:53.701456 2197 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:53.701567 kubelet[2197]: I0213 19:02:53.701512 2197 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:53.702134 kubelet[2197]: W0213 19:02:53.702051 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Feb 13 19:02:53.702209 kubelet[2197]: E0213 19:02:53.702143 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:53.702259 kubelet[2197]: E0213 19:02:53.702238 2197 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:53.702922 kubelet[2197]: E0213 19:02:53.702561 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms" Feb 13 19:02:53.706165 kubelet[2197]: E0213 19:02:53.702890 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9d137666411 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:53.695992849 +0000 UTC m=+0.746542579,LastTimestamp:2025-02-13 19:02:53.695992849 +0000 UTC m=+0.746542579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:02:53.706165 kubelet[2197]: E0213 19:02:53.705545 2197 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:02:53.708187 kubelet[2197]: I0213 19:02:53.708161 2197 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:53.708399 kubelet[2197]: I0213 19:02:53.708279 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:53.709561 kubelet[2197]: I0213 19:02:53.709533 2197 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:53.715586 kubelet[2197]: I0213 19:02:53.715542 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:53.716666 kubelet[2197]: I0213 19:02:53.716622 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:53.716666 kubelet[2197]: I0213 19:02:53.716650 2197 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:02:53.716727 kubelet[2197]: I0213 19:02:53.716675 2197 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:02:53.716727 kubelet[2197]: I0213 19:02:53.716683 2197 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:02:53.716782 kubelet[2197]: E0213 19:02:53.716732 2197 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:53.723336 kubelet[2197]: W0213 19:02:53.723266 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Feb 13 19:02:53.723336 kubelet[2197]: E0213 19:02:53.723335 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:53.724234 kubelet[2197]: I0213 19:02:53.724217 2197 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:02:53.724234 kubelet[2197]: I0213 19:02:53.724231 2197 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:53.724348 kubelet[2197]: I0213 19:02:53.724250 2197 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:53.796987 kubelet[2197]: I0213 19:02:53.796946 2197 policy_none.go:49] "None policy: Start" Feb 13 19:02:53.796987 kubelet[2197]: I0213 19:02:53.796978 2197 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:02:53.796987 kubelet[2197]: I0213 19:02:53.796991 2197 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:53.802974 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:02:53.806681 kubelet[2197]: E0213 19:02:53.806648 2197 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:02:53.817258 kubelet[2197]: E0213 19:02:53.817233 2197 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:02:53.817532 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:02:53.826913 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:02:53.828191 kubelet[2197]: I0213 19:02:53.828128 2197 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:53.828718 kubelet[2197]: I0213 19:02:53.828339 2197 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:02:53.828718 kubelet[2197]: I0213 19:02:53.828359 2197 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:53.828718 kubelet[2197]: I0213 19:02:53.828638 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:53.830071 kubelet[2197]: E0213 19:02:53.830038 2197 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:02:53.830149 kubelet[2197]: E0213 19:02:53.830104 2197 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:02:53.903585 kubelet[2197]: E0213 19:02:53.903471 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms" Feb 13 19:02:53.931457 kubelet[2197]: I0213 19:02:53.931121 2197 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:02:53.931697 kubelet[2197]: E0213 19:02:53.931670 2197 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Feb 13 19:02:54.026531 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:02:54.041911 kubelet[2197]: E0213 19:02:54.041882 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:02:54.044900 systemd[1]: Created slice kubepods-burstable-pod76a4520f627f8a717d415e3b98fa1b95.slice - libcontainer container kubepods-burstable-pod76a4520f627f8a717d415e3b98fa1b95.slice. Feb 13 19:02:54.046917 kubelet[2197]: E0213 19:02:54.046883 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:02:54.048500 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 19:02:54.050158 kubelet[2197]: E0213 19:02:54.050136 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:54.104498 kubelet[2197]: I0213 19:02:54.104453 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:54.104498 kubelet[2197]: I0213 19:02:54.104485 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:54.104498 kubelet[2197]: I0213 19:02:54.104504 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:54.104665 kubelet[2197]: I0213 19:02:54.104521 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:54.104665 kubelet[2197]: I0213 19:02:54.104539 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:02:54.104665 kubelet[2197]: I0213 19:02:54.104556 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:54.104665 kubelet[2197]: I0213 19:02:54.104574 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:54.104665 kubelet[2197]: I0213 19:02:54.104590 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:54.104759 kubelet[2197]: I0213 19:02:54.104608 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:54.133575 kubelet[2197]: I0213 19:02:54.133542 2197 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:02:54.133892 kubelet[2197]: E0213 19:02:54.133843 2197 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 13 19:02:54.304361 kubelet[2197]: E0213 19:02:54.304320 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms"
Feb 13 19:02:54.342709 kubelet[2197]: E0213 19:02:54.342672 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:54.343472 containerd[1457]: time="2025-02-13T19:02:54.343371817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:54.348166 kubelet[2197]: E0213 19:02:54.347851 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:54.348668 containerd[1457]: time="2025-02-13T19:02:54.348516226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76a4520f627f8a717d415e3b98fa1b95,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:54.351018 kubelet[2197]: E0213 19:02:54.350775 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:54.351325 containerd[1457]: time="2025-02-13T19:02:54.351291856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:54.535220 kubelet[2197]: I0213 19:02:54.535193 2197 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:02:54.535553 kubelet[2197]: E0213 19:02:54.535518 2197 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 13 19:02:54.799743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353233104.mount: Deactivated successfully.
Feb 13 19:02:54.804323 containerd[1457]: time="2025-02-13T19:02:54.804237993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:54.806195 containerd[1457]: time="2025-02-13T19:02:54.806155643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 19:02:54.806979 containerd[1457]: time="2025-02-13T19:02:54.806948805Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:54.808200 containerd[1457]: time="2025-02-13T19:02:54.808153615Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:54.808902 containerd[1457]: time="2025-02-13T19:02:54.808843796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:54.810259 containerd[1457]: time="2025-02-13T19:02:54.809798884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:54.810810 containerd[1457]: time="2025-02-13T19:02:54.810770677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:54.813497 containerd[1457]: time="2025-02-13T19:02:54.813461348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:54.815553 containerd[1457]: time="2025-02-13T19:02:54.815522380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 466.936341ms"
Feb 13 19:02:54.816255 containerd[1457]: time="2025-02-13T19:02:54.816220714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.855448ms"
Feb 13 19:02:54.816998 containerd[1457]: time="2025-02-13T19:02:54.816962086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.513382ms"
Feb 13 19:02:54.925523 kubelet[2197]: W0213 19:02:54.925460 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 13 19:02:54.925672 kubelet[2197]: E0213 19:02:54.925528 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:54.925672 kubelet[2197]: W0213 19:02:54.925460 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 13 19:02:54.925672 kubelet[2197]: E0213 19:02:54.925568 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:54.934470 kubelet[2197]: W0213 19:02:54.934378 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 13 19:02:54.934470 kubelet[2197]: E0213 19:02:54.934439 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:54.977624 containerd[1457]: time="2025-02-13T19:02:54.977510892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:54.977880 containerd[1457]: time="2025-02-13T19:02:54.977810086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:54.977978 containerd[1457]: time="2025-02-13T19:02:54.977863035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:54.977978 containerd[1457]: time="2025-02-13T19:02:54.977879740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:54.978161 containerd[1457]: time="2025-02-13T19:02:54.977965977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:54.979675 containerd[1457]: time="2025-02-13T19:02:54.977593653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:54.980025 containerd[1457]: time="2025-02-13T19:02:54.979875155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:54.980025 containerd[1457]: time="2025-02-13T19:02:54.979971023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:54.981239 containerd[1457]: time="2025-02-13T19:02:54.981048354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:54.981239 containerd[1457]: time="2025-02-13T19:02:54.981119726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:54.981239 containerd[1457]: time="2025-02-13T19:02:54.981134792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:54.981385 containerd[1457]: time="2025-02-13T19:02:54.981340356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:55.000280 systemd[1]: Started cri-containerd-6e2915d2a565e66b5964394796c6cfd3156cf13a4cda2b81719facb33875a171.scope - libcontainer container 6e2915d2a565e66b5964394796c6cfd3156cf13a4cda2b81719facb33875a171.
Feb 13 19:02:55.004530 systemd[1]: Started cri-containerd-0b60c2a64e5d828ce3485fa97ce1d51e8bdb2aee38214a2689b55e2215be0eea.scope - libcontainer container 0b60c2a64e5d828ce3485fa97ce1d51e8bdb2aee38214a2689b55e2215be0eea.
Feb 13 19:02:55.006198 systemd[1]: Started cri-containerd-48a00b8c8867c635c2aad966385302e375587a3d15f973f76b675ca1de3ef1b8.scope - libcontainer container 48a00b8c8867c635c2aad966385302e375587a3d15f973f76b675ca1de3ef1b8.
Feb 13 19:02:55.037091 containerd[1457]: time="2025-02-13T19:02:55.036341658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76a4520f627f8a717d415e3b98fa1b95,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2915d2a565e66b5964394796c6cfd3156cf13a4cda2b81719facb33875a171\""
Feb 13 19:02:55.039117 kubelet[2197]: E0213 19:02:55.039062 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:55.044367 containerd[1457]: time="2025-02-13T19:02:55.044333502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b60c2a64e5d828ce3485fa97ce1d51e8bdb2aee38214a2689b55e2215be0eea\""
Feb 13 19:02:55.044473 containerd[1457]: time="2025-02-13T19:02:55.044452563Z" level=info msg="CreateContainer within sandbox \"6e2915d2a565e66b5964394796c6cfd3156cf13a4cda2b81719facb33875a171\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:02:55.045165 kubelet[2197]: E0213 19:02:55.045140 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:55.046878 containerd[1457]: time="2025-02-13T19:02:55.046849281Z" level=info msg="CreateContainer within sandbox \"0b60c2a64e5d828ce3485fa97ce1d51e8bdb2aee38214a2689b55e2215be0eea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:02:55.049724 containerd[1457]: time="2025-02-13T19:02:55.049693665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"48a00b8c8867c635c2aad966385302e375587a3d15f973f76b675ca1de3ef1b8\""
Feb 13 19:02:55.050797 kubelet[2197]: E0213 19:02:55.050612 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:55.052652 containerd[1457]: time="2025-02-13T19:02:55.052604874Z" level=info msg="CreateContainer within sandbox \"48a00b8c8867c635c2aad966385302e375587a3d15f973f76b675ca1de3ef1b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:02:55.065810 containerd[1457]: time="2025-02-13T19:02:55.065704972Z" level=info msg="CreateContainer within sandbox \"0b60c2a64e5d828ce3485fa97ce1d51e8bdb2aee38214a2689b55e2215be0eea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8dff8f0cc7ff81eee26968cf8fe1950fa1d4416011d5f5e95bf5b6123ec61fd\""
Feb 13 19:02:55.066648 containerd[1457]: time="2025-02-13T19:02:55.066480964Z" level=info msg="StartContainer for \"c8dff8f0cc7ff81eee26968cf8fe1950fa1d4416011d5f5e95bf5b6123ec61fd\""
Feb 13 19:02:55.066879 containerd[1457]: time="2025-02-13T19:02:55.066548827Z" level=info msg="CreateContainer within sandbox \"6e2915d2a565e66b5964394796c6cfd3156cf13a4cda2b81719facb33875a171\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20fa4eda3df000cecbb3cca9a6974e54788c79f72cdd80afc33406c041dc6a16\""
Feb 13 19:02:55.067253 containerd[1457]: time="2025-02-13T19:02:55.067226341Z" level=info msg="StartContainer for \"20fa4eda3df000cecbb3cca9a6974e54788c79f72cdd80afc33406c041dc6a16\""
Feb 13 19:02:55.069985 containerd[1457]: time="2025-02-13T19:02:55.069940994Z" level=info msg="CreateContainer within sandbox \"48a00b8c8867c635c2aad966385302e375587a3d15f973f76b675ca1de3ef1b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78f5e0cc737d27d773c0f47afc3c55bec5b2ed85d67922de2271aa146be91c68\""
Feb 13 19:02:55.070594 containerd[1457]: time="2025-02-13T19:02:55.070522748Z" level=info msg="StartContainer for \"78f5e0cc737d27d773c0f47afc3c55bec5b2ed85d67922de2271aa146be91c68\""
Feb 13 19:02:55.104465 systemd[1]: Started cri-containerd-20fa4eda3df000cecbb3cca9a6974e54788c79f72cdd80afc33406c041dc6a16.scope - libcontainer container 20fa4eda3df000cecbb3cca9a6974e54788c79f72cdd80afc33406c041dc6a16.
Feb 13 19:02:55.105031 kubelet[2197]: E0213 19:02:55.104984 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="1.6s"
Feb 13 19:02:55.105560 systemd[1]: Started cri-containerd-c8dff8f0cc7ff81eee26968cf8fe1950fa1d4416011d5f5e95bf5b6123ec61fd.scope - libcontainer container c8dff8f0cc7ff81eee26968cf8fe1950fa1d4416011d5f5e95bf5b6123ec61fd.
Feb 13 19:02:55.109415 systemd[1]: Started cri-containerd-78f5e0cc737d27d773c0f47afc3c55bec5b2ed85d67922de2271aa146be91c68.scope - libcontainer container 78f5e0cc737d27d773c0f47afc3c55bec5b2ed85d67922de2271aa146be91c68.
Feb 13 19:02:55.152717 containerd[1457]: time="2025-02-13T19:02:55.152670533Z" level=info msg="StartContainer for \"c8dff8f0cc7ff81eee26968cf8fe1950fa1d4416011d5f5e95bf5b6123ec61fd\" returns successfully"
Feb 13 19:02:55.152819 containerd[1457]: time="2025-02-13T19:02:55.152766333Z" level=info msg="StartContainer for \"20fa4eda3df000cecbb3cca9a6974e54788c79f72cdd80afc33406c041dc6a16\" returns successfully"
Feb 13 19:02:55.162135 containerd[1457]: time="2025-02-13T19:02:55.162094542Z" level=info msg="StartContainer for \"78f5e0cc737d27d773c0f47afc3c55bec5b2ed85d67922de2271aa146be91c68\" returns successfully"
Feb 13 19:02:55.261292 kubelet[2197]: W0213 19:02:55.261166 2197 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 13 19:02:55.261292 kubelet[2197]: E0213 19:02:55.261232 2197 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:55.339009 kubelet[2197]: I0213 19:02:55.338486 2197 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:02:55.736472 kubelet[2197]: E0213 19:02:55.736363 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:55.737121 kubelet[2197]: E0213 19:02:55.736918 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:55.741745 kubelet[2197]: E0213 19:02:55.741718 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:55.741875 kubelet[2197]: E0213 19:02:55.741857 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:55.747636 kubelet[2197]: E0213 19:02:55.747480 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:55.747636 kubelet[2197]: E0213 19:02:55.747591 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:56.750111 kubelet[2197]: E0213 19:02:56.749898 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:56.750111 kubelet[2197]: E0213 19:02:56.749956 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:56.750111 kubelet[2197]: E0213 19:02:56.750034 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:56.750111 kubelet[2197]: E0213 19:02:56.750059 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:56.750500 kubelet[2197]: E0213 19:02:56.750221 2197 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 19:02:56.750500 kubelet[2197]: E0213 19:02:56.750291 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:02:57.146303 kubelet[2197]: E0213 19:02:57.146255 2197 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 19:02:57.248392 kubelet[2197]: I0213 19:02:57.248104 2197 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 19:02:57.248392 kubelet[2197]: E0213 19:02:57.248172 2197 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 19:02:57.293697 kubelet[2197]: E0213 19:02:57.293570 2197 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9d137666411 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:53.695992849 +0000 UTC m=+0.746542579,LastTimestamp:2025-02-13 19:02:53.695992849 +0000 UTC m=+0.746542579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 19:02:57.303570 kubelet[2197]: I0213 19:02:57.303302 2197 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:57.310048 kubelet[2197]: E0213 19:02:57.310012 2197 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:02:57.310048 kubelet[2197]: I0213 19:02:57.310043 2197 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:02:57.312280 kubelet[2197]: E0213 19:02:57.312066 2197 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:02:57.312280 kubelet[2197]: I0213 19:02:57.312098 2197 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:57.314359 kubelet[2197]: E0213 19:02:57.314334 2197 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:57.346815 kubelet[2197]: E0213 19:02:57.346652 2197 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9d137c59681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:53.702231681 +0000 UTC m=+0.752781371,LastTimestamp:2025-02-13 19:02:53.702231681 +0000 UTC m=+0.752781371,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 19:02:57.400382 kubelet[2197]: E0213 19:02:57.400201 2197 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9d1390a643d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:53.723518013 +0000 UTC m=+0.774067663,LastTimestamp:2025-02-13 19:02:53.723518013 +0000 UTC m=+0.774067663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 19:02:57.458063 kubelet[2197]: E0213 19:02:57.457341 2197 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9d1390a8f51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:02:53.723529041 +0000 UTC m=+0.774078731,LastTimestamp:2025-02-13 19:02:53.723529041 +0000 UTC m=+0.774078731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 19:02:57.696481 kubelet[2197]: I0213 19:02:57.696371 2197 apiserver.go:52] "Watching apiserver"
Feb 13 19:02:57.701719 kubelet[2197]: I0213 19:02:57.701687 2197 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:02:59.410780 systemd[1]: Reload requested from client PID 2473 ('systemctl') (unit session-7.scope)...
Feb 13 19:02:59.410795 systemd[1]: Reloading...
Feb 13 19:02:59.483116 zram_generator::config[2523]: No configuration found.
Feb 13 19:02:59.558452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:59.641550 systemd[1]: Reloading finished in 230 ms.
Feb 13 19:02:59.663451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:59.676067 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:02:59.677162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:59.677225 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 126.4M memory peak.
Feb 13 19:02:59.688326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:59.788129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:59.791476 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:02:59.826166 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:59.826166 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:02:59.826166 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:59.826508 kubelet[2559]: I0213 19:02:59.826210 2559 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:02:59.834907 kubelet[2559]: I0213 19:02:59.834877 2559 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 19:02:59.834907 kubelet[2559]: I0213 19:02:59.834906 2559 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:02:59.835176 kubelet[2559]: I0213 19:02:59.835158 2559 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 19:02:59.836516 kubelet[2559]: I0213 19:02:59.836467 2559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:02:59.838934 kubelet[2559]: I0213 19:02:59.838909 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:02:59.841653 kubelet[2559]: E0213 19:02:59.841625 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:02:59.841653 kubelet[2559]: I0213 19:02:59.841654 2559 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:02:59.844248 kubelet[2559]: I0213 19:02:59.844223 2559 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:02:59.844422 kubelet[2559]: I0213 19:02:59.844391 2559 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:02:59.844566 kubelet[2559]: I0213 19:02:59.844414 2559 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:02:59.844639 kubelet[2559]: I0213 19:02:59.844570 2559 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:02:59.844639 kubelet[2559]: I0213 19:02:59.844578 2559 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 19:02:59.844639 kubelet[2559]: I0213 19:02:59.844616 2559 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:59.844747 kubelet[2559]: I0213 19:02:59.844736 2559 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 19:02:59.844775 kubelet[2559]: I0213 19:02:59.844749 2559 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:02:59.844775 kubelet[2559]: I0213 19:02:59.844764 2559 kubelet.go:352] "Adding apiserver pod source"
Feb 13 19:02:59.844775 kubelet[2559]: I0213 19:02:59.844773 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:02:59.845936 kubelet[2559]: I0213 19:02:59.845461 2559 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:02:59.845936 kubelet[2559]: I0213 19:02:59.845903 2559 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:02:59.847096 kubelet[2559]: I0213 19:02:59.847051 2559 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:02:59.847203 kubelet[2559]: I0213 19:02:59.847108 2559 server.go:1287] "Started kubelet"
Feb 13 19:02:59.847567 kubelet[2559]: I0213 19:02:59.847525 2559 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:02:59.848956 kubelet[2559]: I0213 19:02:59.848926 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:02:59.857305 kubelet[2559]: I0213 19:02:59.848932 2559 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:02:59.857375 kubelet[2559]: I0213 19:02:59.849016 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:02:59.857966 kubelet[2559]: I0213 19:02:59.857511 2559 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:02:59.857966 kubelet[2559]: I0213 19:02:59.850873 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:02:59.862265 kubelet[2559]: I0213 19:02:59.862248 2559 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:02:59.863779 kubelet[2559]: I0213 19:02:59.862773 2559 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:02:59.863779 kubelet[2559]: I0213 19:02:59.863022 2559 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:02:59.864349 kubelet[2559]: E0213 19:02:59.864319 2559 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:02:59.865143 kubelet[2559]: I0213 19:02:59.865121 2559 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:02:59.865238 kubelet[2559]: I0213 19:02:59.865218 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:02:59.868305 kubelet[2559]: I0213 19:02:59.868277 2559 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:02:59.873955 kubelet[2559]: I0213 19:02:59.873925 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:02:59.876694 kubelet[2559]: I0213 19:02:59.876287 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:02:59.876694 kubelet[2559]: I0213 19:02:59.876311 2559 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:02:59.876694 kubelet[2559]: I0213 19:02:59.876330 2559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:02:59.876694 kubelet[2559]: I0213 19:02:59.876339 2559 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:02:59.876694 kubelet[2559]: E0213 19:02:59.876381 2559 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:02:59.901459 kubelet[2559]: I0213 19:02:59.901435 2559 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:02:59.901459 kubelet[2559]: I0213 19:02:59.901452 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:02:59.901583 kubelet[2559]: I0213 19:02:59.901472 2559 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:59.901620 kubelet[2559]: I0213 19:02:59.901605 2559 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:02:59.901650 kubelet[2559]: I0213 19:02:59.901620 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:02:59.901650 kubelet[2559]: I0213 19:02:59.901638 2559 policy_none.go:49] "None policy: Start"
Feb 13 19:02:59.901650 kubelet[2559]: I0213 19:02:59.901646 2559 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:02:59.901705 kubelet[2559]: I0213 19:02:59.901655 2559 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:02:59.901770 kubelet[2559]: I0213 19:02:59.901760 2559 state_mem.go:75] "Updated machine memory state"
Feb 13 19:02:59.906109 kubelet[2559]: I0213 19:02:59.905895 2559 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:02:59.906109 kubelet[2559]: I0213 19:02:59.906049 2559 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:02:59.906109 kubelet[2559]: I0213 19:02:59.906061 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:02:59.906285 kubelet[2559]: I0213 19:02:59.906257 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:02:59.906285 kubelet[2559]: E0213 19:02:59.906837 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:02:59.977828 kubelet[2559]: I0213 19:02:59.977706 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:02:59.977935 kubelet[2559]: I0213 19:02:59.977889 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:02:59.978168 kubelet[2559]: I0213 19:02:59.978147 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.012378 kubelet[2559]: I0213 19:03:00.012300 2559 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:03:00.021733 kubelet[2559]: I0213 19:03:00.021704 2559 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Feb 13 19:03:00.021835 kubelet[2559]: I0213 19:03:00.021789 2559 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 19:03:00.164333 kubelet[2559]: I0213 19:03:00.164290 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.164333 kubelet[2559]: I0213 19:03:00.164327 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:03:00.164496 kubelet[2559]: I0213 19:03:00.164348 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:03:00.164496 kubelet[2559]: I0213 19:03:00.164368 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.164496 kubelet[2559]: I0213 19:03:00.164387 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.164496 kubelet[2559]: I0213 19:03:00.164410 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76a4520f627f8a717d415e3b98fa1b95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76a4520f627f8a717d415e3b98fa1b95\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:03:00.164496 kubelet[2559]: I0213 19:03:00.164446 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.164605 kubelet[2559]: I0213 19:03:00.164464 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.164605 kubelet[2559]: I0213 19:03:00.164480 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:03:00.284524 kubelet[2559]: E0213 19:03:00.284496 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.287881 kubelet[2559]: E0213 19:03:00.287739 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.287881 kubelet[2559]: E0213 19:03:00.287770 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.405594 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 19:03:00.405891 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 19:03:00.842136 sudo[2597]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:00.845275 kubelet[2559]: I0213 19:03:00.845239 2559 apiserver.go:52] "Watching apiserver"
Feb 13 19:03:00.863907 kubelet[2559]: I0213 19:03:00.863872 2559 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:03:00.885421 kubelet[2559]: I0213 19:03:00.885380 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.885822 kubelet[2559]: I0213 19:03:00.885603 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:03:00.885939 kubelet[2559]: I0213 19:03:00.885918 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:03:00.892746 kubelet[2559]: E0213 19:03:00.892699 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:03:00.892865 kubelet[2559]: E0213 19:03:00.892820 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:03:00.893713 kubelet[2559]: E0213 19:03:00.893683 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.894337 kubelet[2559]: E0213 19:03:00.894308 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.894413 kubelet[2559]: E0213 19:03:00.894397 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:03:00.895159 kubelet[2559]: E0213 19:03:00.894507 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:00.922162 kubelet[2559]: I0213 19:03:00.922055 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.922035788 podStartE2EDuration="1.922035788s" podCreationTimestamp="2025-02-13 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:00.911616186 +0000 UTC m=+1.116702790" watchObservedRunningTime="2025-02-13 19:03:00.922035788 +0000 UTC m=+1.127122352"
Feb 13 19:03:00.932934 kubelet[2559]: I0213 19:03:00.932625 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.932607363 podStartE2EDuration="1.932607363s" podCreationTimestamp="2025-02-13 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:00.922270365 +0000 UTC m=+1.127356929" watchObservedRunningTime="2025-02-13 19:03:00.932607363 +0000 UTC m=+1.137693967"
Feb 13 19:03:00.932934 kubelet[2559]: I0213 19:03:00.932791 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.932786085 podStartE2EDuration="1.932786085s" podCreationTimestamp="2025-02-13 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:00.931512042 +0000 UTC m=+1.136598686" watchObservedRunningTime="2025-02-13 19:03:00.932786085 +0000 UTC m=+1.137872689"
Feb 13 19:03:01.886922 kubelet[2559]: E0213 19:03:01.886794 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:01.886922 kubelet[2559]: E0213 19:03:01.886867 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:01.887317 kubelet[2559]: E0213 19:03:01.887299 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:02.883562 sudo[1646]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:02.885123 sshd[1645]: Connection closed by 10.0.0.1 port 59840
Feb 13 19:03:02.886050 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:02.888787 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:59840.service: Deactivated successfully.
Feb 13 19:03:02.891782 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:03:02.892666 systemd[1]: session-7.scope: Consumed 7.483s CPU time, 262.3M memory peak.
Feb 13 19:03:02.894562 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:03:02.896567 systemd-logind[1444]: Removed session 7.
Feb 13 19:03:04.947432 kubelet[2559]: I0213 19:03:04.947396 2559 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:03:04.947742 containerd[1457]: time="2025-02-13T19:03:04.947662473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:03:04.947911 kubelet[2559]: I0213 19:03:04.947805 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:03:05.901271 kubelet[2559]: I0213 19:03:05.900867 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwgw\" (UniqueName: \"kubernetes.io/projected/5dcc1e8d-0f3c-4afb-8c54-2527d20331b3-kube-api-access-9nwgw\") pod \"kube-proxy-mdczx\" (UID: \"5dcc1e8d-0f3c-4afb-8c54-2527d20331b3\") " pod="kube-system/kube-proxy-mdczx"
Feb 13 19:03:05.901271 kubelet[2559]: I0213 19:03:05.900902 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5dcc1e8d-0f3c-4afb-8c54-2527d20331b3-kube-proxy\") pod \"kube-proxy-mdczx\" (UID: \"5dcc1e8d-0f3c-4afb-8c54-2527d20331b3\") " pod="kube-system/kube-proxy-mdczx"
Feb 13 19:03:05.901271 kubelet[2559]: I0213 19:03:05.900923 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dcc1e8d-0f3c-4afb-8c54-2527d20331b3-xtables-lock\") pod \"kube-proxy-mdczx\" (UID: \"5dcc1e8d-0f3c-4afb-8c54-2527d20331b3\") " pod="kube-system/kube-proxy-mdczx"
Feb 13 19:03:05.901271 kubelet[2559]: I0213 19:03:05.900939 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dcc1e8d-0f3c-4afb-8c54-2527d20331b3-lib-modules\") pod \"kube-proxy-mdczx\" (UID: \"5dcc1e8d-0f3c-4afb-8c54-2527d20331b3\") " pod="kube-system/kube-proxy-mdczx"
Feb 13 19:03:05.905125 systemd[1]: Created slice kubepods-besteffort-pod5dcc1e8d_0f3c_4afb_8c54_2527d20331b3.slice - libcontainer container kubepods-besteffort-pod5dcc1e8d_0f3c_4afb_8c54_2527d20331b3.slice.
Feb 13 19:03:05.921950 systemd[1]: Created slice kubepods-burstable-pod3ab3793a_0297_4286_8be4_d42700ea5ebc.slice - libcontainer container kubepods-burstable-pod3ab3793a_0297_4286_8be4_d42700ea5ebc.slice.
Feb 13 19:03:06.056707 systemd[1]: Created slice kubepods-besteffort-podc098cca6_a9b4_43a5_9912_04de274fe4ab.slice - libcontainer container kubepods-besteffort-podc098cca6_a9b4_43a5_9912_04de274fe4ab.slice.
Feb 13 19:03:06.101872 kubelet[2559]: I0213 19:03:06.101821 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-lib-modules\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.101872 kubelet[2559]: I0213 19:03:06.101877 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-hubble-tls\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.101898 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9czt\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-kube-api-access-l9czt\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.101959 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-bpf-maps\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.102001 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-cgroup\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.102023 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab3793a-0297-4286-8be4-d42700ea5ebc-clustermesh-secrets\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.102044 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cni-path\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102273 kubelet[2559]: I0213 19:03:06.102062 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-hostproc\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102077 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-etc-cni-netd\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102107 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-xtables-lock\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102124 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-config-path\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102138 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-kernel\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102157 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-run\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.102399 kubelet[2559]: I0213 19:03:06.102172 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-net\") pod \"cilium-pwkgw\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " pod="kube-system/cilium-pwkgw"
Feb 13 19:03:06.203131 kubelet[2559]: I0213 19:03:06.202996 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlx8\" (UniqueName: \"kubernetes.io/projected/c098cca6-a9b4-43a5-9912-04de274fe4ab-kube-api-access-jjlx8\") pod \"cilium-operator-6c4d7847fc-8cghq\" (UID: \"c098cca6-a9b4-43a5-9912-04de274fe4ab\") " pod="kube-system/cilium-operator-6c4d7847fc-8cghq"
Feb 13 19:03:06.203131 kubelet[2559]: I0213 19:03:06.203050 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c098cca6-a9b4-43a5-9912-04de274fe4ab-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8cghq\" (UID: \"c098cca6-a9b4-43a5-9912-04de274fe4ab\") " pod="kube-system/cilium-operator-6c4d7847fc-8cghq"
Feb 13 19:03:06.214821 kubelet[2559]: E0213 19:03:06.214287 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:06.217619 containerd[1457]: time="2025-02-13T19:03:06.217378822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdczx,Uid:5dcc1e8d-0f3c-4afb-8c54-2527d20331b3,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:06.225213 kubelet[2559]: E0213 19:03:06.225174 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:06.229184 containerd[1457]: time="2025-02-13T19:03:06.229152924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwkgw,Uid:3ab3793a-0297-4286-8be4-d42700ea5ebc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:06.348330 containerd[1457]: time="2025-02-13T19:03:06.348161709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:06.348330 containerd[1457]: time="2025-02-13T19:03:06.348205466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:06.348330 containerd[1457]: time="2025-02-13T19:03:06.348216306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:06.348330 containerd[1457]: time="2025-02-13T19:03:06.348288101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:06.360838 kubelet[2559]: E0213 19:03:06.360573 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:06.361355 containerd[1457]: time="2025-02-13T19:03:06.361177414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8cghq,Uid:c098cca6-a9b4-43a5-9912-04de274fe4ab,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:06.372292 systemd[1]: Started cri-containerd-be87281e07709da7c887f0c72a316701869a595acc9de0155bd8723bd65e3b49.scope - libcontainer container be87281e07709da7c887f0c72a316701869a595acc9de0155bd8723bd65e3b49.
Feb 13 19:03:06.394774 containerd[1457]: time="2025-02-13T19:03:06.394645557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:06.394956 containerd[1457]: time="2025-02-13T19:03:06.394752751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:06.395070 containerd[1457]: time="2025-02-13T19:03:06.394991496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:06.395711 containerd[1457]: time="2025-02-13T19:03:06.395637655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:06.401760 containerd[1457]: time="2025-02-13T19:03:06.401658398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:06.401760 containerd[1457]: time="2025-02-13T19:03:06.401745633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:06.401846 containerd[1457]: time="2025-02-13T19:03:06.401761832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:06.402016 containerd[1457]: time="2025-02-13T19:03:06.401893983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:06.406402 containerd[1457]: time="2025-02-13T19:03:06.406371463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdczx,Uid:5dcc1e8d-0f3c-4afb-8c54-2527d20331b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"be87281e07709da7c887f0c72a316701869a595acc9de0155bd8723bd65e3b49\"" Feb 13 19:03:06.411129 kubelet[2559]: E0213 19:03:06.410985 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:06.416803 containerd[1457]: time="2025-02-13T19:03:06.416686097Z" level=info msg="CreateContainer within sandbox \"be87281e07709da7c887f0c72a316701869a595acc9de0155bd8723bd65e3b49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:03:06.431421 containerd[1457]: time="2025-02-13T19:03:06.431362097Z" level=info msg="CreateContainer within sandbox \"be87281e07709da7c887f0c72a316701869a595acc9de0155bd8723bd65e3b49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa3c5926d8940a3f929d0e671513c1c5edf5b766cc223ddd2fd2a099e05efd28\"" Feb 13 19:03:06.432025 containerd[1457]: time="2025-02-13T19:03:06.431929142Z" level=info msg="StartContainer for \"aa3c5926d8940a3f929d0e671513c1c5edf5b766cc223ddd2fd2a099e05efd28\"" Feb 13 19:03:06.435531 systemd[1]: Started cri-containerd-b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8.scope - libcontainer container b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8. Feb 13 19:03:06.437837 systemd[1]: Started cri-containerd-fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95.scope - libcontainer container fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95. Feb 13 19:03:06.466272 containerd[1457]: time="2025-02-13T19:03:06.465536157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwkgw,Uid:3ab3793a-0297-4286-8be4-d42700ea5ebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\"" Feb 13 19:03:06.481171 kubelet[2559]: E0213 19:03:06.481071 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:06.483262 containerd[1457]: time="2025-02-13T19:03:06.483128295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:03:06.487029 systemd[1]: Started cri-containerd-aa3c5926d8940a3f929d0e671513c1c5edf5b766cc223ddd2fd2a099e05efd28.scope - libcontainer container aa3c5926d8940a3f929d0e671513c1c5edf5b766cc223ddd2fd2a099e05efd28. 
Feb 13 19:03:06.504901 containerd[1457]: time="2025-02-13T19:03:06.504682465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8cghq,Uid:c098cca6-a9b4-43a5-9912-04de274fe4ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\"" Feb 13 19:03:06.505797 kubelet[2559]: E0213 19:03:06.505771 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:06.527205 containerd[1457]: time="2025-02-13T19:03:06.527154257Z" level=info msg="StartContainer for \"aa3c5926d8940a3f929d0e671513c1c5edf5b766cc223ddd2fd2a099e05efd28\" returns successfully" Feb 13 19:03:06.913624 kubelet[2559]: E0213 19:03:06.913190 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:06.926579 kubelet[2559]: I0213 19:03:06.926523 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdczx" podStartSLOduration=1.926505761 podStartE2EDuration="1.926505761s" podCreationTimestamp="2025-02-13 19:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:06.925669173 +0000 UTC m=+7.130755777" watchObservedRunningTime="2025-02-13 19:03:06.926505761 +0000 UTC m=+7.131592365" Feb 13 19:03:08.517106 kubelet[2559]: E0213 19:03:08.516983 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:08.917289 kubelet[2559]: E0213 19:03:08.917135 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:09.267583 kubelet[2559]: E0213 19:03:09.266860 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:09.611168 kubelet[2559]: E0213 19:03:09.611049 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:09.918156 kubelet[2559]: E0213 19:03:09.918038 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:09.918379 kubelet[2559]: E0213 19:03:09.918344 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:10.930019 kubelet[2559]: E0213 19:03:10.929828 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:11.775549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497155940.mount: Deactivated successfully. Feb 13 19:03:14.118566 update_engine[1445]: I20250213 19:03:14.118490 1445 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:03:14.304148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2961) Feb 13 19:03:14.334040 containerd[1457]: time="2025-02-13T19:03:14.333940712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:14.344967 containerd[1457]: time="2025-02-13T19:03:14.344902461Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:03:14.347260 containerd[1457]: time="2025-02-13T19:03:14.346761505Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:14.348865 containerd[1457]: time="2025-02-13T19:03:14.348782942Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.865603049s" Feb 13 19:03:14.348865 containerd[1457]: time="2025-02-13T19:03:14.348838739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:03:14.354403 containerd[1457]: time="2025-02-13T19:03:14.354364712Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:03:14.361595 containerd[1457]: time="2025-02-13T19:03:14.361551057Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:03:14.390285 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2965) Feb 13 19:03:14.395342 containerd[1457]: time="2025-02-13T19:03:14.395296310Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\"" Feb 13 19:03:14.400829 containerd[1457]: time="2025-02-13T19:03:14.400771405Z" level=info msg="StartContainer for \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\"" Feb 13 19:03:14.445297 systemd[1]: Started cri-containerd-a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e.scope - libcontainer container a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e. Feb 13 19:03:14.476676 containerd[1457]: time="2025-02-13T19:03:14.476633607Z" level=info msg="StartContainer for \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\" returns successfully" Feb 13 19:03:14.570234 systemd[1]: cri-containerd-a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e.scope: Deactivated successfully. 
Feb 13 19:03:14.734826 containerd[1457]: time="2025-02-13T19:03:14.734570247Z" level=info msg="shim disconnected" id=a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e namespace=k8s.io Feb 13 19:03:14.734826 containerd[1457]: time="2025-02-13T19:03:14.734650044Z" level=warning msg="cleaning up after shim disconnected" id=a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e namespace=k8s.io Feb 13 19:03:14.734826 containerd[1457]: time="2025-02-13T19:03:14.734658643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:14.941927 kubelet[2559]: E0213 19:03:14.941344 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:14.945050 containerd[1457]: time="2025-02-13T19:03:14.945000839Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:03:14.968689 containerd[1457]: time="2025-02-13T19:03:14.968552031Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\"" Feb 13 19:03:14.973676 containerd[1457]: time="2025-02-13T19:03:14.973534386Z" level=info msg="StartContainer for \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\"" Feb 13 19:03:15.004310 systemd[1]: Started cri-containerd-fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267.scope - libcontainer container fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267. Feb 13 19:03:15.051929 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:15.052159 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:15.052395 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:15.056504 containerd[1457]: time="2025-02-13T19:03:15.056434690Z" level=info msg="StartContainer for \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\" returns successfully" Feb 13 19:03:15.059536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:15.059747 systemd[1]: cri-containerd-fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267.scope: Deactivated successfully. Feb 13 19:03:15.090120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:15.163613 containerd[1457]: time="2025-02-13T19:03:15.163538222Z" level=info msg="shim disconnected" id=fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267 namespace=k8s.io Feb 13 19:03:15.163613 containerd[1457]: time="2025-02-13T19:03:15.163594019Z" level=warning msg="cleaning up after shim disconnected" id=fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267 namespace=k8s.io Feb 13 19:03:15.163613 containerd[1457]: time="2025-02-13T19:03:15.163605139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:15.378029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e-rootfs.mount: Deactivated successfully. 
Feb 13 19:03:15.956147 kubelet[2559]: E0213 19:03:15.956113 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:15.959900 containerd[1457]: time="2025-02-13T19:03:15.959764525Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:03:15.987570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095804590.mount: Deactivated successfully. Feb 13 19:03:15.998653 containerd[1457]: time="2025-02-13T19:03:15.998580688Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\"" Feb 13 19:03:15.999060 containerd[1457]: time="2025-02-13T19:03:15.999027670Z" level=info msg="StartContainer for \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\"" Feb 13 19:03:16.075381 systemd[1]: Started cri-containerd-36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09.scope - libcontainer container 36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09. Feb 13 19:03:16.113088 containerd[1457]: time="2025-02-13T19:03:16.109098408Z" level=info msg="StartContainer for \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\" returns successfully" Feb 13 19:03:16.167424 systemd[1]: cri-containerd-36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09.scope: Deactivated successfully. Feb 13 19:03:16.203511 containerd[1457]: time="2025-02-13T19:03:16.203449855Z" level=info msg="shim disconnected" id=36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09 namespace=k8s.io Feb 13 19:03:16.203511 containerd[1457]: time="2025-02-13T19:03:16.203504933Z" level=warning msg="cleaning up after shim disconnected" id=36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09 namespace=k8s.io Feb 13 19:03:16.203511 containerd[1457]: time="2025-02-13T19:03:16.203514452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:16.377378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09-rootfs.mount: Deactivated successfully. 
Feb 13 19:03:16.501458 containerd[1457]: time="2025-02-13T19:03:16.501402040Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:16.502784 containerd[1457]: time="2025-02-13T19:03:16.502733350Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:03:16.503841 containerd[1457]: time="2025-02-13T19:03:16.503809430Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:16.506021 containerd[1457]: time="2025-02-13T19:03:16.505981549Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.151427605s" Feb 13 19:03:16.506071 containerd[1457]: time="2025-02-13T19:03:16.506020148Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:03:16.511725 containerd[1457]: time="2025-02-13T19:03:16.511692377Z" level=info msg="CreateContainer within sandbox \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:03:16.529546 containerd[1457]: time="2025-02-13T19:03:16.529419277Z" level=info msg="CreateContainer within sandbox \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\"" Feb 13 19:03:16.530900 containerd[1457]: time="2025-02-13T19:03:16.530872102Z" level=info msg="StartContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\"" Feb 13 19:03:16.559280 systemd[1]: Started cri-containerd-2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36.scope - libcontainer container 2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36. 
Feb 13 19:03:16.596464 containerd[1457]: time="2025-02-13T19:03:16.596324345Z" level=info msg="StartContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" returns successfully" Feb 13 19:03:16.962524 kubelet[2559]: E0213 19:03:16.962481 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:16.967575 kubelet[2559]: E0213 19:03:16.967529 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:16.969940 containerd[1457]: time="2025-02-13T19:03:16.969899515Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:03:17.005272 kubelet[2559]: I0213 19:03:17.005179 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8cghq" podStartSLOduration=1.005189256 podStartE2EDuration="11.00516017s" podCreationTimestamp="2025-02-13 19:03:06 +0000 UTC" firstStartedPulling="2025-02-13 19:03:06.507044837 +0000 UTC m=+6.712131441" lastFinishedPulling="2025-02-13 19:03:16.507015751 +0000 UTC m=+16.712102355" observedRunningTime="2025-02-13 19:03:16.978121128 +0000 UTC m=+17.183207732" watchObservedRunningTime="2025-02-13 19:03:17.00516017 +0000 UTC m=+17.210246774" Feb 13 19:03:17.015550 containerd[1457]: time="2025-02-13T19:03:17.015420566Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\"" Feb 13 19:03:17.016723 containerd[1457]: time="2025-02-13T19:03:17.016545486Z" level=info msg="StartContainer for \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\"" Feb 13 19:03:17.040342 systemd[1]: Started cri-containerd-9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c.scope - libcontainer container 9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c. Feb 13 19:03:17.071425 systemd[1]: cri-containerd-9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c.scope: Deactivated successfully. 
Feb 13 19:03:17.073050 containerd[1457]: time="2025-02-13T19:03:17.072986684Z" level=info msg="StartContainer for \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\" returns successfully" Feb 13 19:03:17.253927 containerd[1457]: time="2025-02-13T19:03:17.253853865Z" level=info msg="shim disconnected" id=9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c namespace=k8s.io Feb 13 19:03:17.254157 containerd[1457]: time="2025-02-13T19:03:17.253931303Z" level=warning msg="cleaning up after shim disconnected" id=9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c namespace=k8s.io Feb 13 19:03:17.254157 containerd[1457]: time="2025-02-13T19:03:17.253942102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:17.971619 kubelet[2559]: E0213 19:03:17.971589 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:17.980040 kubelet[2559]: E0213 19:03:17.971673 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:17.981490 containerd[1457]: time="2025-02-13T19:03:17.981435247Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:03:18.017817 containerd[1457]: time="2025-02-13T19:03:18.017744586Z" level=info msg="CreateContainer within sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\"" Feb 13 19:03:18.019318 containerd[1457]: time="2025-02-13T19:03:18.019276814Z" level=info msg="StartContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\"" Feb 13 19:03:18.046629 systemd[1]: Started cri-containerd-d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3.scope - libcontainer container d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3. Feb 13 19:03:18.078601 containerd[1457]: time="2025-02-13T19:03:18.078532408Z" level=info msg="StartContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" returns successfully" Feb 13 19:03:18.251581 kubelet[2559]: I0213 19:03:18.251475 2559 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:03:18.304208 systemd[1]: Created slice kubepods-burstable-pod33c24966_01c9_4b70_81a9_df877f4c8fd5.slice - libcontainer container kubepods-burstable-pod33c24966_01c9_4b70_81a9_df877f4c8fd5.slice. Feb 13 19:03:18.310423 systemd[1]: Created slice kubepods-burstable-pod03127f5e_df59_4585_82bf_d748cda327e2.slice - libcontainer container kubepods-burstable-pod03127f5e_df59_4585_82bf_d748cda327e2.slice. 
Feb 13 19:03:18.393829 kubelet[2559]: I0213 19:03:18.393735 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33c24966-01c9-4b70-81a9-df877f4c8fd5-config-volume\") pod \"coredns-668d6bf9bc-fmcg7\" (UID: \"33c24966-01c9-4b70-81a9-df877f4c8fd5\") " pod="kube-system/coredns-668d6bf9bc-fmcg7" Feb 13 19:03:18.393829 kubelet[2559]: I0213 19:03:18.393780 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2d9s\" (UniqueName: \"kubernetes.io/projected/33c24966-01c9-4b70-81a9-df877f4c8fd5-kube-api-access-m2d9s\") pod \"coredns-668d6bf9bc-fmcg7\" (UID: \"33c24966-01c9-4b70-81a9-df877f4c8fd5\") " pod="kube-system/coredns-668d6bf9bc-fmcg7" Feb 13 19:03:18.393829 kubelet[2559]: I0213 19:03:18.393802 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03127f5e-df59-4585-82bf-d748cda327e2-config-volume\") pod \"coredns-668d6bf9bc-26vr4\" (UID: \"03127f5e-df59-4585-82bf-d748cda327e2\") " pod="kube-system/coredns-668d6bf9bc-26vr4" Feb 13 19:03:18.394020 kubelet[2559]: I0213 19:03:18.393884 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnmkk\" (UniqueName: \"kubernetes.io/projected/03127f5e-df59-4585-82bf-d748cda327e2-kube-api-access-jnmkk\") pod \"coredns-668d6bf9bc-26vr4\" (UID: \"03127f5e-df59-4585-82bf-d748cda327e2\") " pod="kube-system/coredns-668d6bf9bc-26vr4" Feb 13 19:03:18.609050 kubelet[2559]: E0213 19:03:18.609001 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.609846 containerd[1457]: time="2025-02-13T19:03:18.609811308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmcg7,Uid:33c24966-01c9-4b70-81a9-df877f4c8fd5,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:18.614238 kubelet[2559]: E0213 19:03:18.614153 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:18.616127 containerd[1457]: time="2025-02-13T19:03:18.614738981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26vr4,Uid:03127f5e-df59-4585-82bf-d748cda327e2,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:18.976436 kubelet[2559]: E0213 19:03:18.976331 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:19.978019 kubelet[2559]: E0213 19:03:19.977993 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:20.446852 systemd-networkd[1400]: cilium_host: Link UP Feb 13 19:03:20.448219 systemd-networkd[1400]: cilium_net: Link UP Feb 13 19:03:20.448479 systemd-networkd[1400]: cilium_net: Gained carrier Feb 13 19:03:20.448769 systemd-networkd[1400]: cilium_host: Gained carrier Feb 13 19:03:20.448957 systemd-networkd[1400]: cilium_net: Gained IPv6LL Feb 13 19:03:20.449241 systemd-networkd[1400]: cilium_host: Gained IPv6LL Feb 13 19:03:20.535153 systemd-networkd[1400]: cilium_vxlan: Link UP Feb 13 
19:03:20.535162 systemd-networkd[1400]: cilium_vxlan: Gained carrier Feb 13 19:03:20.859124 kernel: NET: Registered PF_ALG protocol family Feb 13 19:03:20.979387 kubelet[2559]: E0213 19:03:20.979354 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:21.458922 systemd-networkd[1400]: lxc_health: Link UP Feb 13 19:03:21.467856 systemd-networkd[1400]: lxc_health: Gained carrier Feb 13 19:03:21.776174 kernel: eth0: renamed from tmpd4e2b Feb 13 19:03:21.790855 systemd-networkd[1400]: lxcc016007566ff: Link UP Feb 13 19:03:21.791051 systemd-networkd[1400]: lxc6672de348ede: Link UP Feb 13 19:03:21.799126 kernel: eth0: renamed from tmpabdda Feb 13 19:03:21.804163 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL Feb 13 19:03:21.804978 systemd-networkd[1400]: lxc6672de348ede: Gained carrier Feb 13 19:03:21.806454 systemd-networkd[1400]: lxcc016007566ff: Gained carrier Feb 13 19:03:22.282524 kubelet[2559]: E0213 19:03:22.282486 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:22.313247 kubelet[2559]: I0213 19:03:22.313162 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pwkgw" podStartSLOduration=9.441463466 podStartE2EDuration="17.313147171s" podCreationTimestamp="2025-02-13 19:03:05 +0000 UTC" firstStartedPulling="2025-02-13 19:03:06.482487455 +0000 UTC m=+6.687574059" lastFinishedPulling="2025-02-13 19:03:14.35417116 +0000 UTC m=+14.559257764" observedRunningTime="2025-02-13 19:03:18.996253789 +0000 UTC m=+19.201340393" watchObservedRunningTime="2025-02-13 19:03:22.313147171 +0000 UTC m=+22.518233775" Feb 13 19:03:22.982610 kubelet[2559]: E0213 19:03:22.982518 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:23.121572 systemd-networkd[1400]: lxcc016007566ff: Gained IPv6LL Feb 13 19:03:23.185537 systemd-networkd[1400]: lxc_health: Gained IPv6LL Feb 13 19:03:23.634372 systemd-networkd[1400]: lxc6672de348ede: Gained IPv6LL Feb 13 19:03:23.986689 kubelet[2559]: E0213 19:03:23.986437 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:25.541462 containerd[1457]: time="2025-02-13T19:03:25.541367103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:25.541462 containerd[1457]: time="2025-02-13T19:03:25.541427501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:25.541462 containerd[1457]: time="2025-02-13T19:03:25.541438901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.542743 containerd[1457]: time="2025-02-13T19:03:25.542612512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.546509 containerd[1457]: time="2025-02-13T19:03:25.544499905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:25.546509 containerd[1457]: time="2025-02-13T19:03:25.544559183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:25.546509 containerd[1457]: time="2025-02-13T19:03:25.544576263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.546509 containerd[1457]: time="2025-02-13T19:03:25.544861296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.563297 systemd[1]: Started cri-containerd-abdda9f55abd83972375332cbdf5dff563c628db4c1c6d329c9e20c7e0c1e864.scope - libcontainer container abdda9f55abd83972375332cbdf5dff563c628db4c1c6d329c9e20c7e0c1e864. Feb 13 19:03:25.565130 systemd[1]: Started cri-containerd-d4e2bd9bfc9566572e7a048e60896bd25d22b3ae24abe4db0ccd4e84a5734cdb.scope - libcontainer container d4e2bd9bfc9566572e7a048e60896bd25d22b3ae24abe4db0ccd4e84a5734cdb. Feb 13 19:03:25.574606 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:25.577368 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:25.597629 containerd[1457]: time="2025-02-13T19:03:25.597589063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmcg7,Uid:33c24966-01c9-4b70-81a9-df877f4c8fd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"abdda9f55abd83972375332cbdf5dff563c628db4c1c6d329c9e20c7e0c1e864\"" Feb 13 19:03:25.598397 kubelet[2559]: E0213 19:03:25.598348 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:25.599315 containerd[1457]: time="2025-02-13T19:03:25.599287861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26vr4,Uid:03127f5e-df59-4585-82bf-d748cda327e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4e2bd9bfc9566572e7a048e60896bd25d22b3ae24abe4db0ccd4e84a5734cdb\"" Feb 13 19:03:25.600554 kubelet[2559]: E0213 19:03:25.600396 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:25.603270 containerd[1457]: time="2025-02-13T19:03:25.603241722Z" level=info msg="CreateContainer within sandbox \"d4e2bd9bfc9566572e7a048e60896bd25d22b3ae24abe4db0ccd4e84a5734cdb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:25.604558 containerd[1457]: time="2025-02-13T19:03:25.604517091Z" level=info msg="CreateContainer within sandbox \"abdda9f55abd83972375332cbdf5dff563c628db4c1c6d329c9e20c7e0c1e864\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:25.622775 containerd[1457]: time="2025-02-13T19:03:25.622722637Z" level=info msg="CreateContainer within sandbox \"abdda9f55abd83972375332cbdf5dff563c628db4c1c6d329c9e20c7e0c1e864\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"189847343c6932ac30f9bc7a06cbef8d952a8cda7ba7b48cfbc557f5ad9ee867\"" Feb 13 19:03:25.623525 containerd[1457]: time="2025-02-13T19:03:25.623466259Z" level=info msg="StartContainer for 
\"189847343c6932ac30f9bc7a06cbef8d952a8cda7ba7b48cfbc557f5ad9ee867\"" Feb 13 19:03:25.625129 containerd[1457]: time="2025-02-13T19:03:25.624964622Z" level=info msg="CreateContainer within sandbox \"d4e2bd9bfc9566572e7a048e60896bd25d22b3ae24abe4db0ccd4e84a5734cdb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf5b67f79245b372648e172b59102a61b9706e975a96457baa1db89692e45aaf\"" Feb 13 19:03:25.625620 containerd[1457]: time="2025-02-13T19:03:25.625563687Z" level=info msg="StartContainer for \"cf5b67f79245b372648e172b59102a61b9706e975a96457baa1db89692e45aaf\"" Feb 13 19:03:25.653266 systemd[1]: Started cri-containerd-189847343c6932ac30f9bc7a06cbef8d952a8cda7ba7b48cfbc557f5ad9ee867.scope - libcontainer container 189847343c6932ac30f9bc7a06cbef8d952a8cda7ba7b48cfbc557f5ad9ee867. Feb 13 19:03:25.654346 systemd[1]: Started cri-containerd-cf5b67f79245b372648e172b59102a61b9706e975a96457baa1db89692e45aaf.scope - libcontainer container cf5b67f79245b372648e172b59102a61b9706e975a96457baa1db89692e45aaf. Feb 13 19:03:25.696125 containerd[1457]: time="2025-02-13T19:03:25.693923665Z" level=info msg="StartContainer for \"189847343c6932ac30f9bc7a06cbef8d952a8cda7ba7b48cfbc557f5ad9ee867\" returns successfully" Feb 13 19:03:25.712168 containerd[1457]: time="2025-02-13T19:03:25.711433789Z" level=info msg="StartContainer for \"cf5b67f79245b372648e172b59102a61b9706e975a96457baa1db89692e45aaf\" returns successfully" Feb 13 19:03:26.003186 kubelet[2559]: E0213 19:03:26.002484 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:26.006327 kubelet[2559]: E0213 19:03:26.005933 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:26.017153 kubelet[2559]: I0213 19:03:26.016586 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-26vr4" podStartSLOduration=20.016567889 podStartE2EDuration="20.016567889s" podCreationTimestamp="2025-02-13 19:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:26.015299879 +0000 UTC m=+26.220386483" watchObservedRunningTime="2025-02-13 19:03:26.016567889 +0000 UTC m=+26.221654493" Feb 13 19:03:26.044683 kubelet[2559]: I0213 19:03:26.044603 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fmcg7" podStartSLOduration=20.044583419 podStartE2EDuration="20.044583419s" podCreationTimestamp="2025-02-13 19:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:26.044334105 +0000 UTC m=+26.249420709" watchObservedRunningTime="2025-02-13 19:03:26.044583419 +0000 UTC m=+26.249670023" Feb 13 19:03:26.547256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525821584.mount: Deactivated successfully. 
Feb 13 19:03:27.020245 kubelet[2559]: E0213 19:03:27.020004 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:27.020245 kubelet[2559]: E0213 19:03:27.020164 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:27.320587 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:48624.service - OpenSSH per-connection server daemon (10.0.0.1:48624). Feb 13 19:03:27.367877 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 48624 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:27.371502 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:27.376844 systemd-logind[1444]: New session 8 of user core. Feb 13 19:03:27.387304 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:27.518000 sshd[3970]: Connection closed by 10.0.0.1 port 48624 Feb 13 19:03:27.517858 sshd-session[3968]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:27.521437 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:48624.service: Deactivated successfully. Feb 13 19:03:27.523311 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:03:27.523968 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:27.524827 systemd-logind[1444]: Removed session 8. Feb 13 19:03:28.021608 kubelet[2559]: E0213 19:03:28.021580 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:32.529650 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:58520.service - OpenSSH per-connection server daemon (10.0.0.1:58520). Feb 13 19:03:32.570785 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 58520 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:32.572167 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:32.576904 systemd-logind[1444]: New session 9 of user core. Feb 13 19:03:32.587327 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:32.697789 sshd[3988]: Connection closed by 10.0.0.1 port 58520 Feb 13 19:03:32.696940 sshd-session[3986]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:32.700474 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:58520.service: Deactivated successfully. Feb 13 19:03:32.702338 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:03:32.703027 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:03:32.703799 systemd-logind[1444]: Removed session 9. Feb 13 19:03:37.717520 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:58532.service - OpenSSH per-connection server daemon (10.0.0.1:58532). Feb 13 19:03:37.755218 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 58532 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:37.756567 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:37.765175 systemd-logind[1444]: New session 10 of user core. Feb 13 19:03:37.772319 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 19:03:37.911933 sshd[4007]: Connection closed by 10.0.0.1 port 58532 Feb 13 19:03:37.911559 sshd-session[4005]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:37.914886 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:58532.service: Deactivated successfully. Feb 13 19:03:37.916473 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:37.918475 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:37.924858 systemd-logind[1444]: Removed session 10. Feb 13 19:03:42.924448 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:40246.service - OpenSSH per-connection server daemon (10.0.0.1:40246). Feb 13 19:03:42.971469 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 40246 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:42.972991 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:42.977719 systemd-logind[1444]: New session 11 of user core. Feb 13 19:03:42.988282 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:03:43.107226 sshd[4023]: Connection closed by 10.0.0.1 port 40246 Feb 13 19:03:43.107835 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:43.118024 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:40246.service: Deactivated successfully. Feb 13 19:03:43.119944 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:03:43.120657 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:03:43.134415 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:40252.service - OpenSSH per-connection server daemon (10.0.0.1:40252). Feb 13 19:03:43.135727 systemd-logind[1444]: Removed session 11. Feb 13 19:03:43.170824 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 40252 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:43.172003 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:43.175831 systemd-logind[1444]: New session 12 of user core. Feb 13 19:03:43.182263 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:03:43.351356 sshd[4039]: Connection closed by 10.0.0.1 port 40252 Feb 13 19:03:43.351760 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:43.368234 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:40252.service: Deactivated successfully. Feb 13 19:03:43.370687 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:03:43.373015 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:03:43.390098 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:40264.service - OpenSSH per-connection server daemon (10.0.0.1:40264). Feb 13 19:03:43.391595 systemd-logind[1444]: Removed session 12. Feb 13 19:03:43.435640 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 40264 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:43.436945 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:43.441205 systemd-logind[1444]: New session 13 of user core. Feb 13 19:03:43.456285 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 19:03:43.583665 sshd[4055]: Connection closed by 10.0.0.1 port 40264 Feb 13 19:03:43.584023 sshd-session[4050]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:43.587470 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:40264.service: Deactivated successfully. Feb 13 19:03:43.589677 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:03:43.591834 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:03:43.592580 systemd-logind[1444]: Removed session 13. Feb 13 19:03:48.596119 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:40276.service - OpenSSH per-connection server daemon (10.0.0.1:40276). Feb 13 19:03:48.636598 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 40276 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:48.638163 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:48.642381 systemd-logind[1444]: New session 14 of user core. Feb 13 19:03:48.653272 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:03:48.767756 sshd[4070]: Connection closed by 10.0.0.1 port 40276 Feb 13 19:03:48.768107 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:48.771975 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:40276.service: Deactivated successfully. Feb 13 19:03:48.773720 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:03:48.774466 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:03:48.775404 systemd-logind[1444]: Removed session 14. Feb 13 19:03:53.779687 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:50122.service - OpenSSH per-connection server daemon (10.0.0.1:50122). Feb 13 19:03:53.823342 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 50122 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:53.824601 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:53.829444 systemd-logind[1444]: New session 15 of user core. Feb 13 19:03:53.836275 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:03:53.965216 sshd[4085]: Connection closed by 10.0.0.1 port 50122 Feb 13 19:03:53.966352 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:53.984842 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:50122.service: Deactivated successfully. Feb 13 19:03:53.986645 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:03:53.987747 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:03:53.990713 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:50130.service - OpenSSH per-connection server daemon (10.0.0.1:50130). Feb 13 19:03:53.992369 systemd-logind[1444]: Removed session 15. Feb 13 19:03:54.045103 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 50130 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:54.046317 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:54.051422 systemd-logind[1444]: New session 16 of user core. Feb 13 19:03:54.063282 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 19:03:54.283225 sshd[4100]: Connection closed by 10.0.0.1 port 50130 Feb 13 19:03:54.283711 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:54.295709 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:50130.service: Deactivated successfully. Feb 13 19:03:54.297852 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:03:54.298765 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:03:54.309937 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:50136.service - OpenSSH per-connection server daemon (10.0.0.1:50136). Feb 13 19:03:54.312676 systemd-logind[1444]: Removed session 16. Feb 13 19:03:54.362419 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 50136 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:54.364485 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:54.372134 systemd-logind[1444]: New session 17 of user core. Feb 13 19:03:54.383284 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:03:55.161654 sshd[4114]: Connection closed by 10.0.0.1 port 50136 Feb 13 19:03:55.162047 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:55.176376 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:50136.service: Deactivated successfully. Feb 13 19:03:55.178031 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:03:55.181254 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:03:55.196854 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:50144.service - OpenSSH per-connection server daemon (10.0.0.1:50144). Feb 13 19:03:55.198096 systemd-logind[1444]: Removed session 17. Feb 13 19:03:55.238769 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 50144 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:55.240053 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.244700 systemd-logind[1444]: New session 18 of user core. Feb 13 19:03:55.254289 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:03:55.481266 sshd[4137]: Connection closed by 10.0.0.1 port 50144 Feb 13 19:03:55.481846 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:55.494592 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:50144.service: Deactivated successfully. Feb 13 19:03:55.498193 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:03:55.500850 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:03:55.514463 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Feb 13 19:03:55.515760 systemd-logind[1444]: Removed session 18. Feb 13 19:03:55.565523 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:55.566889 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.572456 systemd-logind[1444]: New session 19 of user core. Feb 13 19:03:55.586301 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 19:03:55.701056 sshd[4150]: Connection closed by 10.0.0.1 port 50156 Feb 13 19:03:55.701443 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:55.705539 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:50156.service: Deactivated successfully. Feb 13 19:03:55.707601 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:03:55.709585 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:03:55.710819 systemd-logind[1444]: Removed session 19. Feb 13 19:04:00.712469 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:50164.service - OpenSSH per-connection server daemon (10.0.0.1:50164). Feb 13 19:04:00.752997 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 50164 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:00.754346 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:00.758404 systemd-logind[1444]: New session 20 of user core. Feb 13 19:04:00.765241 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:04:00.875355 sshd[4169]: Connection closed by 10.0.0.1 port 50164 Feb 13 19:04:00.875727 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:00.879769 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:50164.service: Deactivated successfully. Feb 13 19:04:00.881915 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:04:00.882682 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:04:00.883645 systemd-logind[1444]: Removed session 20. Feb 13 19:04:05.888794 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964). Feb 13 19:04:05.930717 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:05.932570 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:05.938189 systemd-logind[1444]: New session 21 of user core. Feb 13 19:04:05.948373 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:04:06.059477 sshd[4184]: Connection closed by 10.0.0.1 port 51964 Feb 13 19:04:06.060044 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:06.063865 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:51964.service: Deactivated successfully. Feb 13 19:04:06.066906 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:04:06.067666 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:04:06.068766 systemd-logind[1444]: Removed session 21. Feb 13 19:04:09.881300 kubelet[2559]: E0213 19:04:09.881256 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:10.877706 kubelet[2559]: E0213 19:04:10.877665 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:11.090501 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976). 
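
The kubelet's repeated dns.go:153 "Nameserver limits exceeded" errors above are its resolv.conf cap at work: only the first three resolvers are applied (1.1.1.1 1.0.0.1 8.8.8.8 in this log) and the rest are dropped with a warning. A minimal sketch of that truncation, assuming the limit of 3 visible in the applied line:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // assumption: matches the cap seen in the log above

    // applyNameserverLimit keeps the first maxNameservers resolvers and
    // warns about the rest, mirroring the dns.go:153 message.
    func applyNameserverLimit(servers []string) []string {
        if len(servers) <= maxNameservers {
            return servers
        }
        applied := servers[:maxNameservers]
        fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
            strings.Join(applied, " "))
        return applied
    }

    func main() {
        // Illustrative host resolv.conf with one resolver too many.
        fmt.Println(applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
    }
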
Feb 13 19:04:11.132698 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:11.134143 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:11.139778 systemd-logind[1444]: New session 22 of user core. Feb 13 19:04:11.148305 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:04:11.296268 sshd[4202]: Connection closed by 10.0.0.1 port 51976 Feb 13 19:04:11.298314 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:11.312825 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:51976.service: Deactivated successfully. Feb 13 19:04:11.314793 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:04:11.316481 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:04:11.324760 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:51990.service - OpenSSH per-connection server daemon (10.0.0.1:51990). Feb 13 19:04:11.328700 systemd-logind[1444]: Removed session 22. Feb 13 19:04:11.368148 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 51990 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:11.369302 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:11.375221 systemd-logind[1444]: New session 23 of user core. Feb 13 19:04:11.383323 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:04:13.530576 containerd[1457]: time="2025-02-13T19:04:13.530469253Z" level=info msg="StopContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" with timeout 30 (s)" Feb 13 19:04:13.531849 containerd[1457]: time="2025-02-13T19:04:13.531190909Z" level=info msg="Stop container \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" with signal terminated" Feb 13 19:04:13.560757 systemd[1]: cri-containerd-2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36.scope: Deactivated successfully. Feb 13 19:04:13.584240 containerd[1457]: time="2025-02-13T19:04:13.584188757Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:04:13.587432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36-rootfs.mount: Deactivated successfully. 
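
The containerd error above, failed to reload cni configuration after receiving fs change event(REMOVE "/etc/cni/net.d/05-cilium.conf"), comes from a filesystem watch on the CNI config directory: removing the cilium conf leaves it empty, so the plugin stays uninitialized until a new config lands. A small sketch of such a watch with github.com/fsnotify/fsnotify; the reload logic is an assumption, not containerd's implementation.

    package main

    import (
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        dir := "/etc/cni/net.d"
        if err := w.Add(dir); err != nil {
            log.Fatal(err)
        }
        // React to every create/remove in the directory by re-listing configs.
        for ev := range w.Events {
            confs, _ := filepath.Glob(filepath.Join(dir, "*.conf"))
            if len(confs) == 0 {
                // Mirrors "no network config found in /etc/cni/net.d".
                log.Printf("failed to reload cni configuration after %s: no network config found in %s", ev, dir)
                continue
            }
            log.Printf("reloading cni configuration, %d config(s) found", len(confs))
        }
    }
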
Feb 13 19:04:13.591542 containerd[1457]: time="2025-02-13T19:04:13.591460563Z" level=info msg="StopContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" with timeout 2 (s)" Feb 13 19:04:13.591923 containerd[1457]: time="2025-02-13T19:04:13.591675367Z" level=info msg="Stop container \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" with signal terminated" Feb 13 19:04:13.595134 containerd[1457]: time="2025-02-13T19:04:13.595044964Z" level=info msg="shim disconnected" id=2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36 namespace=k8s.io Feb 13 19:04:13.595134 containerd[1457]: time="2025-02-13T19:04:13.595114966Z" level=warning msg="cleaning up after shim disconnected" id=2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36 namespace=k8s.io Feb 13 19:04:13.595134 containerd[1457]: time="2025-02-13T19:04:13.595123166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.599357 systemd-networkd[1400]: lxc_health: Link DOWN Feb 13 19:04:13.599365 systemd-networkd[1400]: lxc_health: Lost carrier Feb 13 19:04:13.610761 systemd[1]: cri-containerd-d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3.scope: Deactivated successfully. Feb 13 19:04:13.611126 systemd[1]: cri-containerd-d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3.scope: Consumed 6.872s CPU time, 123.3M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 19:04:13.631902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3-rootfs.mount: Deactivated successfully. Feb 13 19:04:13.638140 containerd[1457]: time="2025-02-13T19:04:13.638065024Z" level=info msg="shim disconnected" id=d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3 namespace=k8s.io Feb 13 19:04:13.638140 containerd[1457]: time="2025-02-13T19:04:13.638137066Z" level=warning msg="cleaning up after shim disconnected" id=d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3 namespace=k8s.io Feb 13 19:04:13.638140 containerd[1457]: time="2025-02-13T19:04:13.638145906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.646206 containerd[1457]: time="2025-02-13T19:04:13.646145168Z" level=info msg="StopContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" returns successfully" Feb 13 19:04:13.648772 containerd[1457]: time="2025-02-13T19:04:13.646976147Z" level=info msg="StopPodSandbox for \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\"" Feb 13 19:04:13.648772 containerd[1457]: time="2025-02-13T19:04:13.647013628Z" level=info msg="Container to stop \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.648734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8-shm.mount: Deactivated successfully. Feb 13 19:04:13.655422 systemd[1]: cri-containerd-b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8.scope: Deactivated successfully. 
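
The two StopContainer calls above (timeout 30 s for the operator container, 2 s for the cilium agent, each "with signal terminated") follow the graceful-then-forced pattern: SIGTERM first, wait out the grace period, then SIGKILL. A sketch of that pattern against the containerd Go client, assuming a directly loaded task rather than the full CRI plugin path:

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, timeout time.Duration) error {
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            return err
        }
        // Graceful stop first: "Stop container ... with signal terminated".
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitCh:
            // Exited within the grace period, as both containers did here.
        case <-time.After(timeout):
            // Grace period (30 s above) expired: escalate to SIGKILL.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                return err
            }
            <-exitCh
        }
        _, err = task.Delete(ctx)
        return err
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        if err := stopWithTimeout(ctx, client,
            "2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36",
            30*time.Second); err != nil {
            log.Fatal(err)
        }
    }
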
Feb 13 19:04:13.679249 containerd[1457]: time="2025-02-13T19:04:13.679199561Z" level=info msg="StopContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" returns successfully" Feb 13 19:04:13.680786 containerd[1457]: time="2025-02-13T19:04:13.680753677Z" level=info msg="StopPodSandbox for \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\"" Feb 13 19:04:13.680892 containerd[1457]: time="2025-02-13T19:04:13.680800198Z" level=info msg="Container to stop \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.680892 containerd[1457]: time="2025-02-13T19:04:13.680812918Z" level=info msg="Container to stop \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.680892 containerd[1457]: time="2025-02-13T19:04:13.680825758Z" level=info msg="Container to stop \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.680892 containerd[1457]: time="2025-02-13T19:04:13.680834559Z" level=info msg="Container to stop \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.680892 containerd[1457]: time="2025-02-13T19:04:13.680842759Z" level=info msg="Container to stop \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.682797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95-shm.mount: Deactivated successfully. Feb 13 19:04:13.686915 systemd[1]: cri-containerd-fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95.scope: Deactivated successfully. 
Feb 13 19:04:13.717430 containerd[1457]: time="2025-02-13T19:04:13.717363631Z" level=info msg="shim disconnected" id=b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8 namespace=k8s.io Feb 13 19:04:13.717430 containerd[1457]: time="2025-02-13T19:04:13.717421912Z" level=warning msg="cleaning up after shim disconnected" id=b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8 namespace=k8s.io Feb 13 19:04:13.717430 containerd[1457]: time="2025-02-13T19:04:13.717430352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.723726 containerd[1457]: time="2025-02-13T19:04:13.723654854Z" level=info msg="shim disconnected" id=fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95 namespace=k8s.io Feb 13 19:04:13.723726 containerd[1457]: time="2025-02-13T19:04:13.723726456Z" level=warning msg="cleaning up after shim disconnected" id=fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95 namespace=k8s.io Feb 13 19:04:13.723726 containerd[1457]: time="2025-02-13T19:04:13.723735456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.733414 containerd[1457]: time="2025-02-13T19:04:13.733286994Z" level=info msg="TearDown network for sandbox \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\" successfully" Feb 13 19:04:13.733414 containerd[1457]: time="2025-02-13T19:04:13.733334555Z" level=info msg="StopPodSandbox for \"b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8\" returns successfully" Feb 13 19:04:13.737290 containerd[1457]: time="2025-02-13T19:04:13.737243764Z" level=info msg="TearDown network for sandbox \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" successfully" Feb 13 19:04:13.737290 containerd[1457]: time="2025-02-13T19:04:13.737280125Z" level=info msg="StopPodSandbox for \"fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95\" returns successfully" Feb 13 19:04:13.847866 kubelet[2559]: I0213 19:04:13.847742 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-bpf-maps\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.847866 kubelet[2559]: I0213 19:04:13.847805 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-cgroup\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.847866 kubelet[2559]: I0213 19:04:13.847836 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-etc-cni-netd\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847882 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c098cca6-a9b4-43a5-9912-04de274fe4ab-cilium-config-path\") pod \"c098cca6-a9b4-43a5-9912-04de274fe4ab\" (UID: \"c098cca6-a9b4-43a5-9912-04de274fe4ab\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847906 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-kernel\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847922 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cni-path\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847943 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-config-path\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847981 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-net\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.848897 kubelet[2559]: I0213 19:04:13.847998 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-hostproc\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848013 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-xtables-lock\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848026 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-lib-modules\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848051 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-hubble-tls\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848068 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab3793a-0297-4286-8be4-d42700ea5ebc-clustermesh-secrets\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848100 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9czt\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-kube-api-access-l9czt\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849029 kubelet[2559]: I0213 19:04:13.848121 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-run\") pod \"3ab3793a-0297-4286-8be4-d42700ea5ebc\" (UID: \"3ab3793a-0297-4286-8be4-d42700ea5ebc\") " Feb 13 19:04:13.849168 kubelet[2559]: I0213 19:04:13.848137 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjlx8\" (UniqueName: \"kubernetes.io/projected/c098cca6-a9b4-43a5-9912-04de274fe4ab-kube-api-access-jjlx8\") pod \"c098cca6-a9b4-43a5-9912-04de274fe4ab\" (UID: \"c098cca6-a9b4-43a5-9912-04de274fe4ab\") " Feb 13 19:04:13.851189 kubelet[2559]: I0213 19:04:13.851141 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.851258 kubelet[2559]: I0213 19:04:13.851219 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.851415 kubelet[2559]: I0213 19:04:13.851379 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.851485 kubelet[2559]: I0213 19:04:13.851464 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.851516 kubelet[2559]: I0213 19:04:13.851490 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.851717 kubelet[2559]: I0213 19:04:13.851690 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.853345 kubelet[2559]: I0213 19:04:13.853312 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c098cca6-a9b4-43a5-9912-04de274fe4ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c098cca6-a9b4-43a5-9912-04de274fe4ab" (UID: "c098cca6-a9b4-43a5-9912-04de274fe4ab"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:04:13.853393 kubelet[2559]: I0213 19:04:13.853379 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.853419 kubelet[2559]: I0213 19:04:13.853409 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.853445 kubelet[2559]: I0213 19:04:13.853427 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.853884 kubelet[2559]: I0213 19:04:13.853844 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:04:13.853912 kubelet[2559]: I0213 19:04:13.853903 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.857845 kubelet[2559]: I0213 19:04:13.857803 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-kube-api-access-l9czt" (OuterVolumeSpecName: "kube-api-access-l9czt") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "kube-api-access-l9czt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:13.859900 kubelet[2559]: I0213 19:04:13.859823 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3793a-0297-4286-8be4-d42700ea5ebc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:04:13.861714 kubelet[2559]: I0213 19:04:13.861682 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ab3793a-0297-4286-8be4-d42700ea5ebc" (UID: "3ab3793a-0297-4286-8be4-d42700ea5ebc"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:13.862529 kubelet[2559]: I0213 19:04:13.862494 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c098cca6-a9b4-43a5-9912-04de274fe4ab-kube-api-access-jjlx8" (OuterVolumeSpecName: "kube-api-access-jjlx8") pod "c098cca6-a9b4-43a5-9912-04de274fe4ab" (UID: "c098cca6-a9b4-43a5-9912-04de274fe4ab"). InnerVolumeSpecName "kube-api-access-jjlx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:13.885075 systemd[1]: Removed slice kubepods-besteffort-podc098cca6_a9b4_43a5_9912_04de274fe4ab.slice - libcontainer container kubepods-besteffort-podc098cca6_a9b4_43a5_9912_04de274fe4ab.slice. Feb 13 19:04:13.890443 systemd[1]: Removed slice kubepods-burstable-pod3ab3793a_0297_4286_8be4_d42700ea5ebc.slice - libcontainer container kubepods-burstable-pod3ab3793a_0297_4286_8be4_d42700ea5ebc.slice. Feb 13 19:04:13.890551 systemd[1]: kubepods-burstable-pod3ab3793a_0297_4286_8be4_d42700ea5ebc.slice: Consumed 7.082s CPU time, 123.6M memory peak, 160K read from disk, 12.9M written to disk. Feb 13 19:04:13.948512 kubelet[2559]: I0213 19:04:13.948458 2559 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948512 kubelet[2559]: I0213 19:04:13.948494 2559 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948512 kubelet[2559]: I0213 19:04:13.948503 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948512 kubelet[2559]: I0213 19:04:13.948512 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c098cca6-a9b4-43a5-9912-04de274fe4ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948512 kubelet[2559]: I0213 19:04:13.948523 2559 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948533 2559 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948541 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948548 2559 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948556 2559 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 
19:04:13.948759 kubelet[2559]: I0213 19:04:13.948563 2559 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948570 2559 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948577 2559 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948759 kubelet[2559]: I0213 19:04:13.948585 2559 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab3793a-0297-4286-8be4-d42700ea5ebc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948922 kubelet[2559]: I0213 19:04:13.948592 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9czt\" (UniqueName: \"kubernetes.io/projected/3ab3793a-0297-4286-8be4-d42700ea5ebc-kube-api-access-l9czt\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948922 kubelet[2559]: I0213 19:04:13.948600 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ab3793a-0297-4286-8be4-d42700ea5ebc-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:13.948922 kubelet[2559]: I0213 19:04:13.948610 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjlx8\" (UniqueName: \"kubernetes.io/projected/c098cca6-a9b4-43a5-9912-04de274fe4ab-kube-api-access-jjlx8\") on node \"localhost\" DevicePath \"\"" Feb 13 19:04:14.117129 kubelet[2559]: I0213 19:04:14.116420 2559 scope.go:117] "RemoveContainer" containerID="2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36" Feb 13 19:04:14.119696 containerd[1457]: time="2025-02-13T19:04:14.119370808Z" level=info msg="RemoveContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\"" Feb 13 19:04:14.125204 containerd[1457]: time="2025-02-13T19:04:14.125165455Z" level=info msg="RemoveContainer for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" returns successfully" Feb 13 19:04:14.125599 kubelet[2559]: I0213 19:04:14.125567 2559 scope.go:117] "RemoveContainer" containerID="2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36" Feb 13 19:04:14.125920 containerd[1457]: time="2025-02-13T19:04:14.125884391Z" level=error msg="ContainerStatus for \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\": not found" Feb 13 19:04:14.131714 kubelet[2559]: E0213 19:04:14.131624 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\": not found" containerID="2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36" Feb 13 19:04:14.131846 kubelet[2559]: I0213 19:04:14.131712 2559 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36"} err="failed to get container status \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\": rpc error: code = NotFound desc = an error occurred when try to find container \"2954bfa9b38ef2bbba47b7b0ca542a61ad3140e02e1c7370b2f42a9d2ff92f36\": not found" Feb 13 19:04:14.131846 kubelet[2559]: I0213 19:04:14.131797 2559 scope.go:117] "RemoveContainer" containerID="d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3" Feb 13 19:04:14.133935 containerd[1457]: time="2025-02-13T19:04:14.133879206Z" level=info msg="RemoveContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\"" Feb 13 19:04:14.150043 containerd[1457]: time="2025-02-13T19:04:14.149827836Z" level=info msg="RemoveContainer for \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" returns successfully" Feb 13 19:04:14.150935 kubelet[2559]: I0213 19:04:14.150302 2559 scope.go:117] "RemoveContainer" containerID="9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c" Feb 13 19:04:14.152277 containerd[1457]: time="2025-02-13T19:04:14.152246929Z" level=info msg="RemoveContainer for \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\"" Feb 13 19:04:14.155004 containerd[1457]: time="2025-02-13T19:04:14.154901347Z" level=info msg="RemoveContainer for \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\" returns successfully" Feb 13 19:04:14.155163 kubelet[2559]: I0213 19:04:14.155087 2559 scope.go:117] "RemoveContainer" containerID="36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09" Feb 13 19:04:14.156327 containerd[1457]: time="2025-02-13T19:04:14.156053372Z" level=info msg="RemoveContainer for \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\"" Feb 13 19:04:14.158541 containerd[1457]: time="2025-02-13T19:04:14.158434024Z" level=info msg="RemoveContainer for \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\" returns successfully" Feb 13 19:04:14.158674 kubelet[2559]: I0213 19:04:14.158640 2559 scope.go:117] "RemoveContainer" containerID="fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267" Feb 13 19:04:14.159848 containerd[1457]: time="2025-02-13T19:04:14.159797694Z" level=info msg="RemoveContainer for \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\"" Feb 13 19:04:14.162437 containerd[1457]: time="2025-02-13T19:04:14.162401071Z" level=info msg="RemoveContainer for \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\" returns successfully" Feb 13 19:04:14.162676 kubelet[2559]: I0213 19:04:14.162633 2559 scope.go:117] "RemoveContainer" containerID="a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e" Feb 13 19:04:14.163774 containerd[1457]: time="2025-02-13T19:04:14.163741061Z" level=info msg="RemoveContainer for \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\"" Feb 13 19:04:14.166283 containerd[1457]: time="2025-02-13T19:04:14.166244475Z" level=info msg="RemoveContainer for \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\" returns successfully" Feb 13 19:04:14.166494 kubelet[2559]: I0213 19:04:14.166457 2559 scope.go:117] "RemoveContainer" containerID="d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3" Feb 13 19:04:14.166752 containerd[1457]: time="2025-02-13T19:04:14.166711286Z" level=error msg="ContainerStatus for 
\"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\": not found" Feb 13 19:04:14.166891 kubelet[2559]: E0213 19:04:14.166861 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\": not found" containerID="d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3" Feb 13 19:04:14.166923 kubelet[2559]: I0213 19:04:14.166897 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3"} err="failed to get container status \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d09f9a3c1891afd4838df82ac8ef12f45512f442a49d820299cfdefd211e60a3\": not found" Feb 13 19:04:14.166923 kubelet[2559]: I0213 19:04:14.166920 2559 scope.go:117] "RemoveContainer" containerID="9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c" Feb 13 19:04:14.167114 containerd[1457]: time="2025-02-13T19:04:14.167074054Z" level=error msg="ContainerStatus for \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\": not found" Feb 13 19:04:14.167248 kubelet[2559]: E0213 19:04:14.167226 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\": not found" containerID="9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c" Feb 13 19:04:14.167288 kubelet[2559]: I0213 19:04:14.167259 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c"} err="failed to get container status \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9394794602a28a0fc0daef92190c6acd5ea10fb16ced0bc1a8a4133f4c5f4a7c\": not found" Feb 13 19:04:14.167315 kubelet[2559]: I0213 19:04:14.167290 2559 scope.go:117] "RemoveContainer" containerID="36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09" Feb 13 19:04:14.167577 containerd[1457]: time="2025-02-13T19:04:14.167545544Z" level=error msg="ContainerStatus for \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\": not found" Feb 13 19:04:14.167729 kubelet[2559]: E0213 19:04:14.167705 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\": not found" containerID="36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09" Feb 13 19:04:14.167769 kubelet[2559]: I0213 19:04:14.167751 2559 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09"} err="failed to get container status \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\": rpc error: code = NotFound desc = an error occurred when try to find container \"36251ff2d574c1b2d238c7341c8634e0f84b37c2c72972af1825f2b95a1c1b09\": not found" Feb 13 19:04:14.167793 kubelet[2559]: I0213 19:04:14.167771 2559 scope.go:117] "RemoveContainer" containerID="fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267" Feb 13 19:04:14.167970 containerd[1457]: time="2025-02-13T19:04:14.167936593Z" level=error msg="ContainerStatus for \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\": not found" Feb 13 19:04:14.168057 kubelet[2559]: E0213 19:04:14.168041 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\": not found" containerID="fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267" Feb 13 19:04:14.168099 kubelet[2559]: I0213 19:04:14.168063 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267"} err="failed to get container status \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd430922f24278f41c0a5d5598be07d36fed4f074c4f5c87c19c8c3223661267\": not found" Feb 13 19:04:14.168099 kubelet[2559]: I0213 19:04:14.168075 2559 scope.go:117] "RemoveContainer" containerID="a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e" Feb 13 19:04:14.168349 containerd[1457]: time="2025-02-13T19:04:14.168305801Z" level=error msg="ContainerStatus for \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\": not found" Feb 13 19:04:14.168470 kubelet[2559]: E0213 19:04:14.168450 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\": not found" containerID="a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e" Feb 13 19:04:14.168496 kubelet[2559]: I0213 19:04:14.168474 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e"} err="failed to get container status \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2f154afe36ceace0e05cfc5fb5f40eaf832069a4d856dad28ee3668595cad2e\": not found" Feb 13 19:04:14.572611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b79443f8d0356245b4045ec06799414f7515ae6c8b3280652bc668adb1d73ab8-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:14.572725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd8a32022674ad48d38992c0ece7d87898d09f9392d038593fcbdd23e751fb95-rootfs.mount: Deactivated successfully. Feb 13 19:04:14.572797 systemd[1]: var-lib-kubelet-pods-c098cca6\x2da9b4\x2d43a5\x2d9912\x2d04de274fe4ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djjlx8.mount: Deactivated successfully. Feb 13 19:04:14.572852 systemd[1]: var-lib-kubelet-pods-3ab3793a\x2d0297\x2d4286\x2d8be4\x2dd42700ea5ebc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9czt.mount: Deactivated successfully. Feb 13 19:04:14.572915 systemd[1]: var-lib-kubelet-pods-3ab3793a\x2d0297\x2d4286\x2d8be4\x2dd42700ea5ebc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:04:14.572970 systemd[1]: var-lib-kubelet-pods-3ab3793a\x2d0297\x2d4286\x2d8be4\x2dd42700ea5ebc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:04:14.929168 kubelet[2559]: E0213 19:04:14.929042 2559 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:04:15.474761 sshd[4218]: Connection closed by 10.0.0.1 port 51990 Feb 13 19:04:15.475563 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:15.489642 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:51990.service: Deactivated successfully. Feb 13 19:04:15.491520 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:04:15.491738 systemd[1]: session-23.scope: Consumed 1.446s CPU time, 29.1M memory peak. Feb 13 19:04:15.494238 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:04:15.508461 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:44518.service - OpenSSH per-connection server daemon (10.0.0.1:44518). Feb 13 19:04:15.509844 systemd-logind[1444]: Removed session 23. Feb 13 19:04:15.548868 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 44518 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:15.550348 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:15.555150 systemd-logind[1444]: New session 24 of user core. Feb 13 19:04:15.560266 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:04:15.879188 kubelet[2559]: I0213 19:04:15.879150 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab3793a-0297-4286-8be4-d42700ea5ebc" path="/var/lib/kubelet/pods/3ab3793a-0297-4286-8be4-d42700ea5ebc/volumes" Feb 13 19:04:15.879732 kubelet[2559]: I0213 19:04:15.879710 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c098cca6-a9b4-43a5-9912-04de274fe4ab" path="/var/lib/kubelet/pods/c098cca6-a9b4-43a5-9912-04de274fe4ab/volumes" Feb 13 19:04:16.398857 sshd[4378]: Connection closed by 10.0.0.1 port 44518 Feb 13 19:04:16.400053 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:16.411888 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:44518.service: Deactivated successfully. Feb 13 19:04:16.416773 systemd[1]: session-24.scope: Deactivated successfully. 
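
The var-lib-kubelet-pods-...\x2d...mount units above show systemd's unit-name escaping for mount points: the leading '/' is dropped, remaining '/' become '-', and bytes outside [A-Za-z0-9:_.] (including literal '-') are hex-escaped, which is why every dash in the pod UID appears as \x2d. A rough sketch of that scheme (simplified; the authoritative rules are systemd-escape's):

    package main

    import (
        "fmt"
        "strings"
    )

    func isSafe(b byte) bool {
        return b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' ||
            b >= '0' && b <= '9' || b == ':' || b == '_' || b == '.'
    }

    // escapePath approximates how systemd derives a mount unit name
    // from a filesystem path.
    func escapePath(path string) string {
        p := strings.Trim(path, "/")
        var out strings.Builder
        for i := 0; i < len(p); i++ {
            switch {
            case p[i] == '/':
                out.WriteByte('-')
            case isSafe(p[i]) && !(i == 0 && p[i] == '.'):
                out.WriteByte(p[i])
            default:
                fmt.Fprintf(&out, `\x%02x`, p[i])
            }
        }
        return out.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/3ab3793a-0297-4286-8be4-d42700ea5ebc/volumes"))
        // prints: var-lib-kubelet-pods-3ab3793a\x2d0297\x2d4286\x2d8be4\x2dd42700ea5ebc-volumes
    }
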
Feb 13 19:04:16.419568 kubelet[2559]: I0213 19:04:16.419451 2559 memory_manager.go:355] "RemoveStaleState removing state" podUID="3ab3793a-0297-4286-8be4-d42700ea5ebc" containerName="cilium-agent" Feb 13 19:04:16.419568 kubelet[2559]: I0213 19:04:16.419482 2559 memory_manager.go:355] "RemoveStaleState removing state" podUID="c098cca6-a9b4-43a5-9912-04de274fe4ab" containerName="cilium-operator" Feb 13 19:04:16.421567 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:04:16.432546 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532). Feb 13 19:04:16.436387 systemd-logind[1444]: Removed session 24. Feb 13 19:04:16.446256 systemd[1]: Created slice kubepods-burstable-pod37701ae4_a028_4022_9950_cf3a05dbdc42.slice - libcontainer container kubepods-burstable-pod37701ae4_a028_4022_9950_cf3a05dbdc42.slice. Feb 13 19:04:16.505667 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 44532 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:16.507240 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:16.513312 systemd-logind[1444]: New session 25 of user core. Feb 13 19:04:16.523295 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:04:16.565393 kubelet[2559]: I0213 19:04:16.565345 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/37701ae4-a028-4022-9950-cf3a05dbdc42-cilium-ipsec-secrets\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565393 kubelet[2559]: I0213 19:04:16.565392 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-host-proc-sys-kernel\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565416 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-etc-cni-netd\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565435 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37701ae4-a028-4022-9950-cf3a05dbdc42-hubble-tls\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565453 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-cni-path\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565469 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-lib-modules\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " 
pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565485 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-host-proc-sys-net\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565567 kubelet[2559]: I0213 19:04:16.565501 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6hd\" (UniqueName: \"kubernetes.io/projected/37701ae4-a028-4022-9950-cf3a05dbdc42-kube-api-access-zw6hd\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565529 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-cilium-run\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565547 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-cilium-cgroup\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565566 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-bpf-maps\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565590 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-xtables-lock\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565604 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37701ae4-a028-4022-9950-cf3a05dbdc42-clustermesh-secrets\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565731 kubelet[2559]: I0213 19:04:16.565623 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37701ae4-a028-4022-9950-cf3a05dbdc42-cilium-config-path\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.565874 kubelet[2559]: I0213 19:04:16.565642 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37701ae4-a028-4022-9950-cf3a05dbdc42-hostproc\") pod \"cilium-9g5bs\" (UID: \"37701ae4-a028-4022-9950-cf3a05dbdc42\") " pod="kube-system/cilium-9g5bs" Feb 13 19:04:16.573571 sshd[4392]: Connection closed by 10.0.0.1 port 44532 Feb 13 19:04:16.574155 sshd-session[4389]: pam_unix(sshd:session): session 
closed for user core Feb 13 19:04:16.583721 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:44532.service: Deactivated successfully. Feb 13 19:04:16.585654 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:04:16.587351 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:04:16.596446 systemd[1]: Started sshd@25-10.0.0.42:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534). Feb 13 19:04:16.597962 systemd-logind[1444]: Removed session 25. Feb 13 19:04:16.635640 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:04:16.636989 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:16.642393 systemd-logind[1444]: New session 26 of user core. Feb 13 19:04:16.648265 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:04:16.755303 kubelet[2559]: E0213 19:04:16.755230 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:16.759115 containerd[1457]: time="2025-02-13T19:04:16.757770731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g5bs,Uid:37701ae4-a028-4022-9950-cf3a05dbdc42,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:16.785231 containerd[1457]: time="2025-02-13T19:04:16.784722997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:16.785231 containerd[1457]: time="2025-02-13T19:04:16.785175047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:16.785231 containerd[1457]: time="2025-02-13T19:04:16.785189847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:16.785497 containerd[1457]: time="2025-02-13T19:04:16.785294769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:16.800265 systemd[1]: Started cri-containerd-6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247.scope - libcontainer container 6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247. 
Feb 13 19:04:16.821334 containerd[1457]: time="2025-02-13T19:04:16.821266258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g5bs,Uid:37701ae4-a028-4022-9950-cf3a05dbdc42,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\"" Feb 13 19:04:16.822854 kubelet[2559]: E0213 19:04:16.822205 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:16.826339 containerd[1457]: time="2025-02-13T19:04:16.826296840Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:04:16.837329 containerd[1457]: time="2025-02-13T19:04:16.837269663Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e\"" Feb 13 19:04:16.838214 containerd[1457]: time="2025-02-13T19:04:16.838158361Z" level=info msg="StartContainer for \"99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e\"" Feb 13 19:04:16.866436 systemd[1]: Started cri-containerd-99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e.scope - libcontainer container 99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e. Feb 13 19:04:16.888326 containerd[1457]: time="2025-02-13T19:04:16.888275417Z" level=info msg="StartContainer for \"99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e\" returns successfully" Feb 13 19:04:16.904234 systemd[1]: cri-containerd-99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e.scope: Deactivated successfully. 
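
The cilium-9g5bs startup above follows the canonical CRI ordering: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer runs it. A compressed sketch of that sequencing; the Runtime interface is a deliberately simplified stand-in, not the real CRI RuntimeService API:

    package main

    import "fmt"

    type Runtime interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startPod enforces the sandbox-first ordering seen in the log.
    func startPod(rt Runtime, pod string, containers []string) error {
        sandbox, err := rt.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        for _, name := range containers {
            id, err := rt.CreateContainer(sandbox, name)
            if err != nil {
                return err
            }
            if err := rt.StartContainer(id); err != nil {
                return err
            }
        }
        return nil
    }

    // fakeRuntime just prints the call order so the sketch runs standalone.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        fmt.Println("RunPodSandbox", pod)
        return "sandbox-0", nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        f.n++
        fmt.Println("CreateContainer", sb, name)
        return fmt.Sprintf("ctr-%d", f.n), nil
    }
    func (f *fakeRuntime) StartContainer(id string) error {
        fmt.Println("StartContainer", id)
        return nil
    }

    func main() {
        _ = startPod(&fakeRuntime{}, "cilium-9g5bs", []string{"mount-cgroup"})
    }
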
Feb 13 19:04:16.939743 containerd[1457]: time="2025-02-13T19:04:16.939667980Z" level=info msg="shim disconnected" id=99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e namespace=k8s.io
Feb 13 19:04:16.939743 containerd[1457]: time="2025-02-13T19:04:16.939736141Z" level=warning msg="cleaning up after shim disconnected" id=99d202340a7a1882cedd2fb88ad12b0d1ed3e7f89c711a4bdfee9f307f23f72e namespace=k8s.io
Feb 13 19:04:16.939743 containerd[1457]: time="2025-02-13T19:04:16.939745541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:17.130832 kubelet[2559]: E0213 19:04:17.130704 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:17.133724 containerd[1457]: time="2025-02-13T19:04:17.133607012Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:17.145445 containerd[1457]: time="2025-02-13T19:04:17.145389162Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb\""
Feb 13 19:04:17.146180 containerd[1457]: time="2025-02-13T19:04:17.146142016Z" level=info msg="StartContainer for \"be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb\""
Feb 13 19:04:17.173321 systemd[1]: Started cri-containerd-be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb.scope - libcontainer container be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb.
Feb 13 19:04:17.198434 containerd[1457]: time="2025-02-13T19:04:17.198377235Z" level=info msg="StartContainer for \"be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb\" returns successfully"
Feb 13 19:04:17.208756 systemd[1]: cri-containerd-be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb.scope: Deactivated successfully.
Feb 13 19:04:17.231913 containerd[1457]: time="2025-02-13T19:04:17.231837607Z" level=info msg="shim disconnected" id=be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb namespace=k8s.io
Feb 13 19:04:17.231913 containerd[1457]: time="2025-02-13T19:04:17.231907368Z" level=warning msg="cleaning up after shim disconnected" id=be4dc4f14dfc805755d41263c8d1b1b2d7123bd34c4bb982aa45a0d984953ceb namespace=k8s.io
Feb 13 19:04:17.231913 containerd[1457]: time="2025-02-13T19:04:17.231917008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:18.137307 kubelet[2559]: E0213 19:04:18.137155 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:18.139435 containerd[1457]: time="2025-02-13T19:04:18.139385515Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:18.183044 containerd[1457]: time="2025-02-13T19:04:18.182901250Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8\""
Feb 13 19:04:18.184612 containerd[1457]: time="2025-02-13T19:04:18.183410460Z" level=info msg="StartContainer for \"89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8\""
Feb 13 19:04:18.216315 systemd[1]: Started cri-containerd-89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8.scope - libcontainer container 89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8.
Feb 13 19:04:18.247221 containerd[1457]: time="2025-02-13T19:04:18.247175215Z" level=info msg="StartContainer for \"89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8\" returns successfully"
Feb 13 19:04:18.249944 systemd[1]: cri-containerd-89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8.scope: Deactivated successfully.
Feb 13 19:04:18.282581 containerd[1457]: time="2025-02-13T19:04:18.282358434Z" level=info msg="shim disconnected" id=89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8 namespace=k8s.io
Feb 13 19:04:18.282581 containerd[1457]: time="2025-02-13T19:04:18.282416835Z" level=warning msg="cleaning up after shim disconnected" id=89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8 namespace=k8s.io
Feb 13 19:04:18.282581 containerd[1457]: time="2025-02-13T19:04:18.282427355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:18.670413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89eb90e947a19dedc04dd2a326429c7e607074623c9e6cebdec523513612d9a8-rootfs.mount: Deactivated successfully.
Feb 13 19:04:19.141874 kubelet[2559]: E0213 19:04:19.141815 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:19.145258 containerd[1457]: time="2025-02-13T19:04:19.145215297Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:19.158533 containerd[1457]: time="2025-02-13T19:04:19.158374614Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3\""
Feb 13 19:04:19.159246 containerd[1457]: time="2025-02-13T19:04:19.159191589Z" level=info msg="StartContainer for \"55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3\""
Feb 13 19:04:19.183041 systemd[1]: run-containerd-runc-k8s.io-55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3-runc.CVRAmU.mount: Deactivated successfully.
Feb 13 19:04:19.195325 systemd[1]: Started cri-containerd-55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3.scope - libcontainer container 55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3.
Feb 13 19:04:19.222156 systemd[1]: cri-containerd-55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3.scope: Deactivated successfully.
Feb 13 19:04:19.223534 containerd[1457]: time="2025-02-13T19:04:19.223489226Z" level=info msg="StartContainer for \"55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3\" returns successfully"
Feb 13 19:04:19.249154 containerd[1457]: time="2025-02-13T19:04:19.248458836Z" level=info msg="shim disconnected" id=55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3 namespace=k8s.io
Feb 13 19:04:19.249154 containerd[1457]: time="2025-02-13T19:04:19.248714320Z" level=warning msg="cleaning up after shim disconnected" id=55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3 namespace=k8s.io
Feb 13 19:04:19.249154 containerd[1457]: time="2025-02-13T19:04:19.248725121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:19.670494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55675a77e2020cf7bcdd93c8cd589f66e1c1c3dfb21a9d58b21e74e622dc9ae3-rootfs.mount: Deactivated successfully.
Feb 13 19:04:19.930394 kubelet[2559]: E0213 19:04:19.930209 2559 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:04:20.154827 kubelet[2559]: E0213 19:04:20.152146 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:20.162839 containerd[1457]: time="2025-02-13T19:04:20.159852650Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:04:20.204929 containerd[1457]: time="2025-02-13T19:04:20.204772387Z" level=info msg="CreateContainer within sandbox \"6ff42d469f3942f735181a29261c329b82b3c1581890170acfea910d47bba247\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ffd86090e257c9c4f2bb29769eb9312977d16589f3d17e78a32b0e6612130206\""
Feb 13 19:04:20.206251 containerd[1457]: time="2025-02-13T19:04:20.206202012Z" level=info msg="StartContainer for \"ffd86090e257c9c4f2bb29769eb9312977d16589f3d17e78a32b0e6612130206\""
Feb 13 19:04:20.236305 systemd[1]: Started cri-containerd-ffd86090e257c9c4f2bb29769eb9312977d16589f3d17e78a32b0e6612130206.scope - libcontainer container ffd86090e257c9c4f2bb29769eb9312977d16589f3d17e78a32b0e6612130206.
Feb 13 19:04:20.262441 containerd[1457]: time="2025-02-13T19:04:20.262388463Z" level=info msg="StartContainer for \"ffd86090e257c9c4f2bb29769eb9312977d16589f3d17e78a32b0e6612130206\" returns successfully"
Feb 13 19:04:20.547662 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:04:21.157960 kubelet[2559]: E0213 19:04:21.157097 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:21.173235 kubelet[2559]: I0213 19:04:21.173113 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9g5bs" podStartSLOduration=5.173095973 podStartE2EDuration="5.173095973s" podCreationTimestamp="2025-02-13 19:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:21.172787527 +0000 UTC m=+81.377874131" watchObservedRunningTime="2025-02-13 19:04:21.173095973 +0000 UTC m=+81.378182617"
Feb 13 19:04:21.383944 kubelet[2559]: I0213 19:04:21.383887 2559 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:04:21Z","lastTransitionTime":"2025-02-13T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:04:22.756246 kubelet[2559]: E0213 19:04:22.756201 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:23.514238 systemd-networkd[1400]: lxc_health: Link UP
Feb 13 19:04:23.522166 systemd-networkd[1400]: lxc_health: Gained carrier
Feb 13 19:04:24.757112 kubelet[2559]: E0213 19:04:24.756803 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:24.877315 kubelet[2559]: E0213 19:04:24.877257 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:25.168691 kubelet[2559]: E0213 19:04:25.168532 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:25.265307 systemd-networkd[1400]: lxc_health: Gained IPv6LL
Feb 13 19:04:26.169875 kubelet[2559]: E0213 19:04:26.169835 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:29.496663 sshd[4401]: Connection closed by 10.0.0.1 port 44534
Feb 13 19:04:29.497359 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:29.501709 systemd[1]: sshd@25-10.0.0.42:22-10.0.0.1:44534.service: Deactivated successfully.
Feb 13 19:04:29.504653 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:04:29.505358 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:04:29.506428 systemd-logind[1444]: Removed session 26.