Mar 20 21:19:55.950973 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 20 19:36:47 -00 2025
Mar 20 21:19:55.951010 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019
Mar 20 21:19:55.951023 kernel: BIOS-provided physical RAM map:
Mar 20 21:19:55.951030 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 20 21:19:55.951037 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 20 21:19:55.951044 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 20 21:19:55.951052 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 20 21:19:55.951059 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 20 21:19:55.951085 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 20 21:19:55.951093 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 20 21:19:55.951100 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 20 21:19:55.951114 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 20 21:19:55.951121 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 20 21:19:55.951128 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 20 21:19:55.951140 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 20 21:19:55.951148 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 20 21:19:55.951161 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 20 21:19:55.951169 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 20 21:19:55.951177 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 20 21:19:55.951184 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 20 21:19:55.951192 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 20 21:19:55.951200 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 20 21:19:55.951208 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 20 21:19:55.951215 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 20 21:19:55.951223 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 20 21:19:55.951233 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 20 21:19:55.951241 kernel: NX (Execute Disable) protection: active
Mar 20 21:19:55.951252 kernel: APIC: Static calls initialized
Mar 20 21:19:55.951259 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 20 21:19:55.951267 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 20 21:19:55.951275 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 20 21:19:55.951283 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 20 21:19:55.951294 kernel: extended physical RAM map:
Mar 20 21:19:55.951305 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 20 21:19:55.951317 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 20 21:19:55.951333 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 20 21:19:55.951346 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 20 21:19:55.951359 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 20 21:19:55.951372 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 20 21:19:55.951387 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 20 21:19:55.951399 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 20 21:19:55.951407 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 20 21:19:55.951419 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 20 21:19:55.951427 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 20 21:19:55.951435 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 20 21:19:55.951446 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 20 21:19:55.951455 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 20 21:19:55.951465 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 20 21:19:55.951474 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 20 21:19:55.951482 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 20 21:19:55.951490 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 20 21:19:55.951499 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 20 21:19:55.951507 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 20 21:19:55.951515 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 20 21:19:55.951524 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 20 21:19:55.951535 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 20 21:19:55.951543 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 20 21:19:55.951553 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 20 21:19:55.951565 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 20 21:19:55.951573 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 20 21:19:55.951581 kernel: efi: EFI v2.7 by EDK II
Mar 20 21:19:55.951590 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 20 21:19:55.951598 kernel: random: crng init done
Mar 20 21:19:55.951606 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 20 21:19:55.951626 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 20 21:19:55.951642 kernel: secureboot: Secure boot disabled
Mar 20 21:19:55.951657 kernel: SMBIOS 2.8 present.
Mar 20 21:19:55.951665 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 20 21:19:55.951674 kernel: Hypervisor detected: KVM
Mar 20 21:19:55.951682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 20 21:19:55.951699 kernel: kvm-clock: using sched offset of 3741736380 cycles
Mar 20 21:19:55.951708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 20 21:19:55.951717 kernel: tsc: Detected 2794.746 MHz processor
Mar 20 21:19:55.951725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 20 21:19:55.951734 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 20 21:19:55.951743 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 20 21:19:55.951754 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 20 21:19:55.951762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 20 21:19:55.951771 kernel: Using GB pages for direct mapping
Mar 20 21:19:55.951779 kernel: ACPI: Early table checksum verification disabled
Mar 20 21:19:55.951788 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 20 21:19:55.951796 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 21:19:55.951805 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951813 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951822 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 20 21:19:55.951833 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951841 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951849 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951858 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:19:55.951866 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 20 21:19:55.951875 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 20 21:19:55.951883 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 20 21:19:55.951892 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 20 21:19:55.951900 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 20 21:19:55.951911 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 20 21:19:55.951919 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 20 21:19:55.951928 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 20 21:19:55.951939 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 20 21:19:55.951972 kernel: No NUMA configuration found
Mar 20 21:19:55.951981 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 20 21:19:55.951989 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 20 21:19:55.951998 kernel: Zone ranges:
Mar 20 21:19:55.952006 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 20 21:19:55.952018 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 20 21:19:55.952028 kernel: Normal empty
Mar 20 21:19:55.952037 kernel: Movable zone start for each node
Mar 20 21:19:55.952045 kernel: Early memory node ranges
Mar 20 21:19:55.952053 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 20 21:19:55.952062 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 20 21:19:55.952070 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 20 21:19:55.952079 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 20 21:19:55.952087 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 20 21:19:55.952095 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 20 21:19:55.952106 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 20 21:19:55.952115 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 20 21:19:55.952123 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 20 21:19:55.952131 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 20 21:19:55.952140 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 20 21:19:55.952156 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 20 21:19:55.952167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 20 21:19:55.952176 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 20 21:19:55.952185 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 20 21:19:55.952194 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 20 21:19:55.952205 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 20 21:19:55.952213 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 20 21:19:55.952225 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 20 21:19:55.952233 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 20 21:19:55.952242 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 20 21:19:55.952252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 20 21:19:55.952260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 20 21:19:55.952272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 20 21:19:55.952280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 20 21:19:55.952289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 20 21:19:55.952298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 20 21:19:55.952307 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 20 21:19:55.952316 kernel: TSC deadline timer available
Mar 20 21:19:55.952325 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 20 21:19:55.952333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 20 21:19:55.952342 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 20 21:19:55.952353 kernel: kvm-guest: setup PV sched yield
Mar 20 21:19:55.952362 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 20 21:19:55.952371 kernel: Booting paravirtualized kernel on KVM
Mar 20 21:19:55.952380 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 20 21:19:55.952389 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 20 21:19:55.952398 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 20 21:19:55.952406 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 20 21:19:55.952415 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 20 21:19:55.952424 kernel: kvm-guest: PV spinlocks enabled
Mar 20 21:19:55.952435 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 20 21:19:55.952445 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019
Mar 20 21:19:55.952454 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 21:19:55.952465 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 21:19:55.952474 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 21:19:55.952483 kernel: Fallback order for Node 0: 0
Mar 20 21:19:55.952492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 20 21:19:55.952501 kernel: Policy zone: DMA32
Mar 20 21:19:55.952512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 21:19:55.952521 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 179872K reserved, 0K cma-reserved)
Mar 20 21:19:55.952530 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 21:19:55.952539 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 20 21:19:55.952548 kernel: ftrace: allocated 149 pages with 4 groups
Mar 20 21:19:55.952557 kernel: Dynamic Preempt: voluntary
Mar 20 21:19:55.952565 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 21:19:55.952575 kernel: rcu: RCU event tracing is enabled.
Mar 20 21:19:55.952584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 21:19:55.952595 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 21:19:55.952604 kernel: Rude variant of Tasks RCU enabled.
Mar 20 21:19:55.952613 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 21:19:55.952622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 21:19:55.952631 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 21:19:55.952640 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 20 21:19:55.952649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 21:19:55.952657 kernel: Console: colour dummy device 80x25
Mar 20 21:19:55.952666 kernel: printk: console [ttyS0] enabled
Mar 20 21:19:55.952677 kernel: ACPI: Core revision 20230628
Mar 20 21:19:55.952696 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 20 21:19:55.952706 kernel: APIC: Switch to symmetric I/O mode setup
Mar 20 21:19:55.952714 kernel: x2apic enabled
Mar 20 21:19:55.952723 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 20 21:19:55.952734 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 20 21:19:55.952743 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 20 21:19:55.952752 kernel: kvm-guest: setup PV IPIs
Mar 20 21:19:55.952761 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 20 21:19:55.952772 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 20 21:19:55.952781 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Mar 20 21:19:55.952790 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 20 21:19:55.952799 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 20 21:19:55.952813 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 20 21:19:55.952831 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 20 21:19:55.952856 kernel: Spectre V2 : Mitigation: Retpolines
Mar 20 21:19:55.952880 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 20 21:19:55.952889 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 20 21:19:55.952902 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 20 21:19:55.952911 kernel: RETBleed: Mitigation: untrained return thunk
Mar 20 21:19:55.952920 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 20 21:19:55.952929 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 20 21:19:55.952938 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 20 21:19:55.952975 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 20 21:19:55.952985 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 20 21:19:55.952994 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 20 21:19:55.953007 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 20 21:19:55.953015 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 20 21:19:55.953024 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 20 21:19:55.953033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 20 21:19:55.953042 kernel: Freeing SMP alternatives memory: 32K
Mar 20 21:19:55.953051 kernel: pid_max: default: 32768 minimum: 301
Mar 20 21:19:55.953060 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 21:19:55.953069 kernel: landlock: Up and running.
Mar 20 21:19:55.953077 kernel: SELinux: Initializing.
Mar 20 21:19:55.953089 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:19:55.953098 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:19:55.953107 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 20 21:19:55.953116 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:19:55.953125 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:19:55.953134 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:19:55.953143 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 20 21:19:55.953151 kernel: ... version: 0
Mar 20 21:19:55.953160 kernel: ... bit width: 48
Mar 20 21:19:55.953171 kernel: ... generic registers: 6
Mar 20 21:19:55.953180 kernel: ... value mask: 0000ffffffffffff
Mar 20 21:19:55.953189 kernel: ... max period: 00007fffffffffff
Mar 20 21:19:55.953198 kernel: ... fixed-purpose events: 0
Mar 20 21:19:55.953206 kernel: ... event mask: 000000000000003f
Mar 20 21:19:55.953215 kernel: signal: max sigframe size: 1776
Mar 20 21:19:55.953224 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 21:19:55.953233 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 21:19:55.953242 kernel: smp: Bringing up secondary CPUs ...
Mar 20 21:19:55.953253 kernel: smpboot: x86: Booting SMP configuration:
Mar 20 21:19:55.953262 kernel: .... node #0, CPUs: #1 #2 #3
Mar 20 21:19:55.953270 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 21:19:55.953279 kernel: smpboot: Max logical packages: 1
Mar 20 21:19:55.953288 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Mar 20 21:19:55.953297 kernel: devtmpfs: initialized
Mar 20 21:19:55.953306 kernel: x86/mm: Memory block size: 128MB
Mar 20 21:19:55.953315 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 20 21:19:55.953324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 20 21:19:55.953332 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 20 21:19:55.953344 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 20 21:19:55.953353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 20 21:19:55.953362 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 20 21:19:55.953371 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 21:19:55.953379 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 21:19:55.953388 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 21:19:55.953397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 21:19:55.953406 kernel: audit: initializing netlink subsys (disabled)
Mar 20 21:19:55.953417 kernel: audit: type=2000 audit(1742505595.417:1): state=initialized audit_enabled=0 res=1
Mar 20 21:19:55.953426 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 21:19:55.953435 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 20 21:19:55.953444 kernel: cpuidle: using governor menu
Mar 20 21:19:55.953453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 21:19:55.953462 kernel: dca service started, version 1.12.1
Mar 20 21:19:55.953470 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 20 21:19:55.953479 kernel: PCI: Using configuration type 1 for base access
Mar 20 21:19:55.953488 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 20 21:19:55.953499 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 21:19:55.953508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 21:19:55.953518 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 21:19:55.953526 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 21:19:55.953535 kernel: ACPI: Added _OSI(Module Device)
Mar 20 21:19:55.953544 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 21:19:55.953553 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 21:19:55.953561 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 21:19:55.953570 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 21:19:55.953581 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 20 21:19:55.953590 kernel: ACPI: Interpreter enabled
Mar 20 21:19:55.953599 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 20 21:19:55.953608 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 20 21:19:55.953616 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 20 21:19:55.953625 kernel: PCI: Using E820 reservations for host bridge windows
Mar 20 21:19:55.953634 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 20 21:19:55.953643 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 21:19:55.953923 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 21:19:55.954109 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 20 21:19:55.954242 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 20 21:19:55.954254 kernel: PCI host bridge to bus 0000:00
Mar 20 21:19:55.954398 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 20 21:19:55.954521 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 20 21:19:55.954643 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 20 21:19:55.954789 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 20 21:19:55.954912 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 20 21:19:55.955053 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 20 21:19:55.955175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 21:19:55.955336 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 20 21:19:55.955486 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 20 21:19:55.955618 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 20 21:19:55.955767 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 20 21:19:55.955898 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 20 21:19:55.956046 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 20 21:19:55.956177 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 20 21:19:55.956357 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 21:19:55.956492 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 20 21:19:55.956628 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 20 21:19:55.956770 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 20 21:19:55.956917 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 20 21:19:55.957074 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 20 21:19:55.957208 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 20 21:19:55.957357 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 20 21:19:55.957510 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 20 21:19:55.957649 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 20 21:19:55.957790 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 20 21:19:55.957922 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 20 21:19:55.958074 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 20 21:19:55.958225 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 20 21:19:55.958359 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 20 21:19:55.958506 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 20 21:19:55.958645 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 20 21:19:55.958786 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 20 21:19:55.958936 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 20 21:19:55.959110 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 20 21:19:55.959124 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 20 21:19:55.959133 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 20 21:19:55.959142 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 20 21:19:55.959155 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 20 21:19:55.959165 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 20 21:19:55.959174 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 20 21:19:55.959182 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 20 21:19:55.959191 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 20 21:19:55.959200 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 20 21:19:55.959209 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 20 21:19:55.959218 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 20 21:19:55.959227 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 20 21:19:55.959238 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 20 21:19:55.959247 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 20 21:19:55.959256 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 20 21:19:55.959264 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 20 21:19:55.959273 kernel: iommu: Default domain type: Translated
Mar 20 21:19:55.959282 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 20 21:19:55.959291 kernel: efivars: Registered efivars operations
Mar 20 21:19:55.959300 kernel: PCI: Using ACPI for IRQ routing
Mar 20 21:19:55.959309 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 20 21:19:55.959345 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 20 21:19:55.959354 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 20 21:19:55.959363 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 20 21:19:55.959371 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 20 21:19:55.959380 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 20 21:19:55.959389 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 20 21:19:55.959398 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 20 21:19:55.959407 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 20 21:19:55.959542 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 20 21:19:55.959678 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 20 21:19:55.959821 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 20 21:19:55.959833 kernel: vgaarb: loaded
Mar 20 21:19:55.959842 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 20 21:19:55.959851 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 20 21:19:55.959860 kernel: clocksource: Switched to clocksource kvm-clock
Mar 20 21:19:55.959869 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 21:19:55.959878 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 21:19:55.959887 kernel: pnp: PnP ACPI init
Mar 20 21:19:55.960166 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 20 21:19:55.960181 kernel: pnp: PnP ACPI: found 6 devices
Mar 20 21:19:55.960190 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 20 21:19:55.960200 kernel: NET: Registered PF_INET protocol family
Mar 20 21:19:55.960229 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 21:19:55.960241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 21:19:55.960250 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 21:19:55.960260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 21:19:55.960272 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 21:19:55.960281 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 21:19:55.960291 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:19:55.960300 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:19:55.960309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 21:19:55.960319 kernel: NET: Registered PF_XDP protocol family
Mar 20 21:19:55.960453 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 20 21:19:55.960584 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 20 21:19:55.960721 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 20 21:19:55.960842 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 20 21:19:55.960995 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 20 21:19:55.961119 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 20 21:19:55.961239 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 20 21:19:55.961358 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 20 21:19:55.961370 kernel: PCI: CLS 0 bytes, default 64
Mar 20 21:19:55.961379 kernel: Initialise system trusted keyrings
Mar 20 21:19:55.961393 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 21:19:55.961403 kernel: Key type asymmetric registered
Mar 20 21:19:55.961412 kernel: Asymmetric key parser 'x509' registered
Mar 20 21:19:55.961421 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 20 21:19:55.961431 kernel: io scheduler mq-deadline registered
Mar 20 21:19:55.961440 kernel: io scheduler kyber registered
Mar 20 21:19:55.961449 kernel: io scheduler bfq registered
Mar 20 21:19:55.961459 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 20 21:19:55.961468 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 20 21:19:55.961481 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 20 21:19:55.961492 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 20 21:19:55.961502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 21:19:55.961511 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 20 21:19:55.961521 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 20 21:19:55.961530 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 20 21:19:55.961542 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 20 21:19:55.961727 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 20 21:19:55.961855 kernel: rtc_cmos 00:04: registered as rtc0
Mar 20 21:19:55.961868 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 20 21:19:55.962038 kernel: rtc_cmos 00:04: setting system clock to 2025-03-20T21:19:55 UTC (1742505595)
Mar 20 21:19:55.962166 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 20 21:19:55.962178 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 20 21:19:55.962187 kernel: efifb: probing for efifb
Mar 20 21:19:55.962201 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 20 21:19:55.962210 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 20 21:19:55.962220 kernel: efifb: scrolling: redraw
Mar 20 21:19:55.962229 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 20 21:19:55.962238 kernel: Console: switching to colour frame buffer device 160x50
Mar 20 21:19:55.962248 kernel: fb0: EFI VGA frame buffer device
Mar 20 21:19:55.962257 kernel: pstore: Using crash dump compression: deflate
Mar 20 21:19:55.962267 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 20 21:19:55.962276 kernel: NET: Registered PF_INET6 protocol family
Mar 20 21:19:55.962288 kernel: Segment Routing with IPv6
Mar 20 21:19:55.962297 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 21:19:55.962309 kernel: NET: Registered PF_PACKET protocol family
Mar 20 21:19:55.962318 kernel: Key type dns_resolver registered
Mar 20 21:19:55.962327 kernel: IPI shorthand broadcast: enabled
Mar 20 21:19:55.962337 kernel: sched_clock: Marking stable (1172003291, 153523948)->(1391469787, -65942548)
Mar 20 21:19:55.962346 kernel: registered taskstats version 1
Mar 20 21:19:55.962356 kernel: Loading compiled-in X.509 certificates
Mar 20 21:19:55.962365 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 9e7923b67df1c6f0613bc4380f7ea8de9ce851ac'
Mar 20 21:19:55.962377 kernel: Key type .fscrypt registered
Mar 20 21:19:55.962386 kernel: Key type fscrypt-provisioning registered
Mar 20 21:19:55.962395 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 21:19:55.962405 kernel: ima: Allocated hash algorithm: sha1
Mar 20 21:19:55.962414 kernel: ima: No architecture policies found
Mar 20 21:19:55.962424 kernel: clk: Disabling unused clocks
Mar 20 21:19:55.962433 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 20 21:19:55.962443 kernel: Write protecting the kernel read-only data: 40960k
Mar 20 21:19:55.962452 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 20 21:19:55.962464 kernel: Run /init as init process
Mar 20 21:19:55.962473 kernel: with arguments:
Mar 20 21:19:55.962482 kernel: /init
Mar 20 21:19:55.962491 kernel: with environment:
Mar 20 21:19:55.962500 kernel: HOME=/
Mar 20 21:19:55.962509 kernel: TERM=linux
Mar 20 21:19:55.962518 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 21:19:55.962532 systemd[1]: Successfully made /usr/ read-only.
Mar 20 21:19:55.962547 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:19:55.962557 systemd[1]: Detected virtualization kvm.
Mar 20 21:19:55.962567 systemd[1]: Detected architecture x86-64.
Mar 20 21:19:55.962577 systemd[1]: Running in initrd.
Mar 20 21:19:55.962586 systemd[1]: No hostname configured, using default hostname.
Mar 20 21:19:55.962596 systemd[1]: Hostname set to .
Mar 20 21:19:55.962606 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:19:55.962616 systemd[1]: Queued start job for default target initrd.target.
Mar 20 21:19:55.962628 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:19:55.962638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:19:55.962648 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 21:19:55.962658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:19:55.962668 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 21:19:55.962679 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 21:19:55.962700 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 21:19:55.962712 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 21:19:55.962722 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:19:55.962732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:19:55.962742 systemd[1]: Reached target paths.target - Path Units.
Mar 20 21:19:55.962752 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:19:55.962762 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:19:55.962772 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 21:19:55.962782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 21:19:55.962794 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 21:19:55.962804 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 21:19:55.962813 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 21:19:55.962823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:19:55.962833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:19:55.962843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:19:55.962853 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 21:19:55.962863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 21:19:55.962872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:19:55.962885 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 21:19:55.962895 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 21:19:55.962904 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:19:55.962914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:19:55.962924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:19:55.962934 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 21:19:55.962944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:19:55.962987 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 21:19:55.962997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 21:19:55.963036 systemd-journald[192]: Collecting audit messages is disabled.
Mar 20 21:19:55.963061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:19:55.963072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:19:55.963082 systemd-journald[192]: Journal started
Mar 20 21:19:55.963105 systemd-journald[192]: Runtime Journal (/run/log/journal/ee18de08850845a39eabb6967568c6a5) is 6M, max 48.2M, 42.2M free.
Mar 20 21:19:55.951664 systemd-modules-load[194]: Inserted module 'overlay'
Mar 20 21:19:55.966554 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:19:55.966782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:19:55.970053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:19:55.973092 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:19:55.979972 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 21:19:55.982142 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 20 21:19:55.983003 kernel: Bridge firewalling registered
Mar 20 21:19:55.989249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:19:55.991318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:19:55.994569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 21:19:55.997622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:19:56.000256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:19:56.003161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:19:56.010640 dracut-cmdline[223]: dracut-dracut-053
Mar 20 21:19:56.011865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:19:56.013922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:19:56.016247 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019
Mar 20 21:19:56.063401 systemd-resolved[241]: Positive Trust Anchors:
Mar 20 21:19:56.063427 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:19:56.063458 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:19:56.066128 systemd-resolved[241]: Defaulting to hostname 'linux'.
Mar 20 21:19:56.067399 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:19:56.073260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:19:56.098056 kernel: SCSI subsystem initialized
Mar 20 21:19:56.107006 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 21:19:56.117997 kernel: iscsi: registered transport (tcp)
Mar 20 21:19:56.142992 kernel: iscsi: registered transport (qla4xxx)
Mar 20 21:19:56.143062 kernel: QLogic iSCSI HBA Driver
Mar 20 21:19:56.200177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:19:56.202069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 21:19:56.243295 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 21:19:56.243344 kernel: device-mapper: uevent: version 1.0.3
Mar 20 21:19:56.244380 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 21:19:56.290007 kernel: raid6: avx2x4 gen() 29520 MB/s
Mar 20 21:19:56.306982 kernel: raid6: avx2x2 gen() 29455 MB/s
Mar 20 21:19:56.324110 kernel: raid6: avx2x1 gen() 25220 MB/s
Mar 20 21:19:56.324137 kernel: raid6: using algorithm avx2x4 gen() 29520 MB/s
Mar 20 21:19:56.342208 kernel: raid6: .... xor() 7798 MB/s, rmw enabled
Mar 20 21:19:56.342235 kernel: raid6: using avx2x2 recovery algorithm
Mar 20 21:19:56.367980 kernel: xor: automatically using best checksumming function avx
Mar 20 21:19:56.533976 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 21:19:56.547256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:19:56.549606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:19:56.580024 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 20 21:19:56.585686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:19:56.588547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 21:19:56.615653 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Mar 20 21:19:56.647584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:19:56.651302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:19:56.733406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:19:56.738149 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 20 21:19:56.767263 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 20 21:19:56.794359 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 20 21:19:56.794574 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 20 21:19:56.794594 kernel: GPT:9289727 != 19775487
Mar 20 21:19:56.794621 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 20 21:19:56.794638 kernel: GPT:9289727 != 19775487
Mar 20 21:19:56.794653 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 20 21:19:56.794679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:19:56.768211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 20 21:19:56.770536 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 21:19:56.775693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:19:56.777215 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:19:56.780577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 20 21:19:56.805975 kernel: libata version 3.00 loaded.
Mar 20 21:19:56.806005 kernel: cryptd: max_cpu_qlen set to 1000
Mar 20 21:19:56.806818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 21:19:56.820016 kernel: ahci 0000:00:1f.2: version 3.0
Mar 20 21:19:56.861894 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 20 21:19:56.861914 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 20 21:19:56.862110 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 20 21:19:56.862263 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 20 21:19:56.862275 kernel: AES CTR mode by8 optimization enabled
Mar 20 21:19:56.862287 kernel: scsi host0: ahci
Mar 20 21:19:56.862473 kernel: scsi host1: ahci
Mar 20 21:19:56.862635 kernel: scsi host2: ahci
Mar 20 21:19:56.862830 kernel: scsi host3: ahci
Mar 20 21:19:56.864259 kernel: scsi host4: ahci
Mar 20 21:19:56.864432 kernel: BTRFS: device fsid 48a514e8-9ecc-46c2-935b-caca347f921e devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
Mar 20 21:19:56.864446 kernel: scsi host5: ahci
Mar 20 21:19:56.864608 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 20 21:19:56.864621 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 20 21:19:56.864633 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (463)
Mar 20 21:19:56.864651 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 20 21:19:56.864665 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 20 21:19:56.864686 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 20 21:19:56.864698 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 20 21:19:56.827179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:19:56.827358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:19:56.829290 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:19:56.830748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:19:56.830933 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:19:56.832972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:19:56.838153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:19:56.880776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 20 21:19:56.890069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 20 21:19:56.898682 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 20 21:19:56.898968 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 20 21:19:56.909903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 21:19:56.912352 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 20 21:19:56.912610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:19:56.912664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:19:56.916724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:19:56.918202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:19:56.935878 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 20 21:19:56.948113 disk-uuid[556]: Primary Header is updated.
Mar 20 21:19:56.948113 disk-uuid[556]: Secondary Entries is updated.
Mar 20 21:19:56.948113 disk-uuid[556]: Secondary Header is updated.
Mar 20 21:19:56.954063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:19:56.948227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:19:56.950069 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:19:56.977721 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:19:57.167343 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 20 21:19:57.167433 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 20 21:19:57.167449 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 20 21:19:57.168995 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 20 21:19:57.169098 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 20 21:19:57.169999 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 20 21:19:57.171204 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 20 21:19:57.171222 kernel: ata3.00: applying bridge limits
Mar 20 21:19:57.172224 kernel: ata3.00: configured for UDMA/100
Mar 20 21:19:57.172989 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 20 21:19:57.229574 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 20 21:19:57.241799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 20 21:19:57.241824 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 20 21:19:57.971189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:19:57.972093 disk-uuid[560]: The operation has completed successfully.
Mar 20 21:19:58.005723 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 20 21:19:58.005862 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 20 21:19:58.039059 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 20 21:19:58.058334 sh[596]: Success
Mar 20 21:19:58.070982 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 20 21:19:58.107464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 20 21:19:58.109976 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 20 21:19:58.127219 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 20 21:19:58.135684 kernel: BTRFS info (device dm-0): first mount of filesystem 48a514e8-9ecc-46c2-935b-caca347f921e
Mar 20 21:19:58.135715 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 20 21:19:58.135727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 20 21:19:58.136698 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 20 21:19:58.137431 kernel: BTRFS info (device dm-0): using free space tree
Mar 20 21:19:58.142320 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 20 21:19:58.143873 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 20 21:19:58.144836 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 20 21:19:58.147548 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 20 21:19:58.173986 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e
Mar 20 21:19:58.174026 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 20 21:19:58.175434 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:19:58.177976 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:19:58.183000 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e
Mar 20 21:19:58.265769 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 21:19:58.270085 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 21:19:58.291153 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 20 21:19:58.294075 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 20 21:19:58.321259 systemd-networkd[772]: lo: Link UP
Mar 20 21:19:58.321269 systemd-networkd[772]: lo: Gained carrier
Mar 20 21:19:58.324754 systemd-networkd[772]: Enumeration completed
Mar 20 21:19:58.325691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 21:19:58.325798 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:19:58.325803 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 21:19:58.326623 systemd-networkd[772]: eth0: Link UP
Mar 20 21:19:58.326627 systemd-networkd[772]: eth0: Gained carrier
Mar 20 21:19:58.326644 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:19:58.330831 systemd[1]: Reached target network.target - Network.
Mar 20 21:19:58.343113 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 21:19:58.419566 ignition[775]: Ignition 2.20.0
Mar 20 21:19:58.419579 ignition[775]: Stage: fetch-offline
Mar 20 21:19:58.419643 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:19:58.419656 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:19:58.419783 ignition[775]: parsed url from cmdline: ""
Mar 20 21:19:58.419788 ignition[775]: no config URL provided
Mar 20 21:19:58.419794 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Mar 20 21:19:58.419805 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Mar 20 21:19:58.419837 ignition[775]: op(1): [started] loading QEMU firmware config module
Mar 20 21:19:58.419843 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 20 21:19:58.444389 ignition[775]: op(1): [finished] loading QEMU firmware config module
Mar 20 21:19:58.482639 ignition[775]: parsing config with SHA512: 6ad4b041c0e6d5968cba2cedb3f9dd908c9f9e85e63433be7cec4e69e4969047d2c81573d85ddd23ff19ae4b10ba85255f0be6e146f02a32e44c712283ebd5d3
Mar 20 21:19:58.491249 unknown[775]: fetched base config from "system"
Mar 20 21:19:58.491264 unknown[775]: fetched user config from "qemu"
Mar 20 21:19:58.491704 ignition[775]: fetch-offline: fetch-offline passed
Mar 20 21:19:58.491823 ignition[775]: Ignition finished successfully
Mar 20 21:19:58.494226 systemd-resolved[241]: Detected conflict on linux IN A 10.0.0.14
Mar 20 21:19:58.494991 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 21:19:58.495011 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Mar 20 21:19:58.495606 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 20 21:19:58.496464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 20 21:19:58.548019 ignition[787]: Ignition 2.20.0
Mar 20 21:19:58.548031 ignition[787]: Stage: kargs
Mar 20 21:19:58.548206 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:19:58.548218 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:19:58.549207 ignition[787]: kargs: kargs passed
Mar 20 21:19:58.549257 ignition[787]: Ignition finished successfully
Mar 20 21:19:58.555922 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 20 21:19:58.558116 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 20 21:19:58.588062 ignition[795]: Ignition 2.20.0
Mar 20 21:19:58.588074 ignition[795]: Stage: disks
Mar 20 21:19:58.588246 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:19:58.588259 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:19:58.589090 ignition[795]: disks: disks passed
Mar 20 21:19:58.589140 ignition[795]: Ignition finished successfully
Mar 20 21:19:58.627685 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 20 21:19:58.629083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 20 21:19:58.630778 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 20 21:19:58.631999 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 21:19:58.633989 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 21:19:58.636145 systemd[1]: Reached target basic.target - Basic System.
Mar 20 21:19:58.638990 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 20 21:19:58.664738 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 20 21:19:58.671434 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 20 21:19:58.676006 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 20 21:19:58.795973 kernel: EXT4-fs (vda9): mounted filesystem 79cdbe74-6884-4c57-b04d-c9a431509f16 r/w with ordered data mode. Quota mode: none.
Mar 20 21:19:58.796873 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 20 21:19:58.798605 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 20 21:19:58.801215 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 21:19:58.802707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 20 21:19:58.804288 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 20 21:19:58.804339 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 20 21:19:58.804368 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 21:19:58.824272 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 20 21:19:58.827987 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (813)
Mar 20 21:19:58.828409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 20 21:19:58.833337 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e
Mar 20 21:19:58.833369 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 20 21:19:58.833385 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:19:58.834971 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:19:58.845707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 21:19:59.020851 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 20 21:19:59.026079 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 20 21:19:59.031451 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 20 21:19:59.035760 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 20 21:19:59.133337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 20 21:19:59.138394 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 20 21:19:59.141748 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 20 21:19:59.164874 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 20 21:19:59.166185 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e
Mar 20 21:19:59.201288 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 20 21:19:59.218525 ignition[928]: INFO : Ignition 2.20.0
Mar 20 21:19:59.218525 ignition[928]: INFO : Stage: mount
Mar 20 21:19:59.220252 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:19:59.220252 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:19:59.220252 ignition[928]: INFO : mount: mount passed
Mar 20 21:19:59.220252 ignition[928]: INFO : Ignition finished successfully
Mar 20 21:19:59.225862 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 20 21:19:59.229058 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 20 21:19:59.252716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 21:19:59.275881 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (939)
Mar 20 21:19:59.275914 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e
Mar 20 21:19:59.275927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 20 21:19:59.277354 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:19:59.280234 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:19:59.281476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 21:19:59.315975 ignition[956]: INFO : Ignition 2.20.0
Mar 20 21:19:59.315975 ignition[956]: INFO : Stage: files
Mar 20 21:19:59.318062 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:19:59.318062 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:19:59.318062 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 20 21:19:59.321650 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 20 21:19:59.321650 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 20 21:19:59.325624 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 20 21:19:59.327178 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 20 21:19:59.329041 unknown[956]: wrote ssh authorized keys file for user: core
Mar 20 21:19:59.330299 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 20 21:19:59.332007 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 20 21:19:59.334113 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 20 21:19:59.371439 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 20 21:19:59.565847 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 20 21:19:59.565847 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 20 21:19:59.570171 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 21:19:59.570171 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 20 21:19:59.574284 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 20 21:20:00.096230 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 20 21:20:00.140552 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 20 21:20:01.657693 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 20 21:20:01.657693 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 20 21:20:01.661662 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:20:01.680524 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:20:01.684960 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:20:01.686593 ignition[956]: INFO : files: files passed
Mar 20 21:20:01.686593 ignition[956]: INFO : Ignition finished successfully
Mar 20 21:20:01.688260 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 20 21:20:01.691938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 20 21:20:01.694038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 20 21:20:01.708865 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 20 21:20:01.709005 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 20 21:20:01.713353 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 20 21:20:01.714947 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:20:01.714947 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:20:01.719468 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:20:01.717242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:20:01.719733 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 20 21:20:01.722852 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 20 21:20:01.770707 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 20 21:20:01.770860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 20 21:20:01.773133 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 20 21:20:01.775204 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 20 21:20:01.777268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 20 21:20:01.778084 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 21:20:01.804055 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:20:01.805689 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 20 21:20:01.827062 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:20:01.828337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:20:01.830547 systemd[1]: Stopped target timers.target - Timer Units.
Mar 20 21:20:01.832530 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 20 21:20:01.832656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:20:01.834832 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 20 21:20:01.836542 systemd[1]: Stopped target basic.target - Basic System.
Mar 20 21:20:01.838546 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 20 21:20:01.840576 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 21:20:01.842565 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 20 21:20:01.844711 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 20 21:20:01.846804 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 21:20:01.849076 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 20 21:20:01.851105 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 20 21:20:01.853298 systemd[1]: Stopped target swap.target - Swaps.
Mar 20 21:20:01.855063 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 20 21:20:01.855182 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 21:20:01.857298 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:20:01.858883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:20:01.860972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 20 21:20:01.861079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:20:01.863206 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 20 21:20:01.863323 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 20 21:20:01.865542 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 20 21:20:01.865668 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 21:20:01.867687 systemd[1]: Stopped target paths.target - Path Units.
Mar 20 21:20:01.869415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 20 21:20:01.874023 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:20:01.876155 systemd[1]: Stopped target slices.target - Slice Units.
Mar 20 21:20:01.877798 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 20 21:20:01.879828 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 20 21:20:01.879940 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 21:20:01.882278 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 20 21:20:01.882372 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 21:20:01.884134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 20 21:20:01.884262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:20:01.886196 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 20 21:20:01.886307 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 20 21:20:01.888787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 20 21:20:01.889716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 20 21:20:01.889835 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:20:01.892466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 20 21:20:01.893444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 20 21:20:01.893581 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:20:01.895748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 20 21:20:01.895858 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:20:01.902983 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 20 21:20:01.903100 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 20 21:20:01.912082 ignition[1012]: INFO : Ignition 2.20.0
Mar 20 21:20:01.912082 ignition[1012]: INFO : Stage: umount
Mar 20 21:20:01.913791 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:20:01.913791 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:20:01.913791 ignition[1012]: INFO : umount: umount passed
Mar 20 21:20:01.913791 ignition[1012]: INFO : Ignition finished successfully
Mar 20 21:20:01.915232 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 20 21:20:01.915357 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 20 21:20:01.917205 systemd[1]: Stopped target network.target - Network.
Mar 20 21:20:01.918593 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 20 21:20:01.918654 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 20 21:20:01.920655 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 20 21:20:01.920716 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 20 21:20:01.922721 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 20 21:20:01.922773 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 20 21:20:01.924743 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 20 21:20:01.924793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 20 21:20:01.926733 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 20 21:20:01.928700 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 20 21:20:01.931806 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 20 21:20:01.936759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 20 21:20:01.936902 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 20 21:20:01.940115 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 20 21:20:01.940366 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 20 21:20:01.940491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 20 21:20:01.944727 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 20 21:20:01.945529 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 20 21:20:01.945609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:20:01.947590 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 20 21:20:01.948648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 20 21:20:01.948704 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 21:20:01.951074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 21:20:01.951126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:20:01.954378 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 20 21:20:01.954444 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:20:01.956811 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 20 21:20:01.956866 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:20:01.959308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:20:01.963188 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 21:20:01.963260 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 20 21:20:01.983274 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 20 21:20:01.983416 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 20 21:20:01.985607 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 20 21:20:01.985822 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:20:01.988090 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 20 21:20:01.988161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:20:01.989488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 20 21:20:01.989529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:20:01.991750 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 20 21:20:01.991802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:20:01.993935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 20 21:20:01.993999 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:20:02.013314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:20:02.013368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:20:02.016502 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 20 21:20:02.018350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 20 21:20:02.018411 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:20:02.021473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:20:02.021526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:20:02.024608 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 20 21:20:02.024678 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 20 21:20:02.036696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 20 21:20:02.036850 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 20 21:20:02.055941 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 20 21:20:02.056088 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 20 21:20:02.058066 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 20 21:20:02.059844 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 20 21:20:02.059972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 20 21:20:02.063163 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 20 21:20:02.087591 systemd[1]: Switching root.
Mar 20 21:20:02.119991 systemd-journald[192]: Journal stopped
Mar 20 21:20:03.295754 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 20 21:20:03.295828 kernel: SELinux: policy capability network_peer_controls=1
Mar 20 21:20:03.295849 kernel: SELinux: policy capability open_perms=1
Mar 20 21:20:03.295861 kernel: SELinux: policy capability extended_socket_class=1
Mar 20 21:20:03.295872 kernel: SELinux: policy capability always_check_network=0
Mar 20 21:20:03.295884 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 20 21:20:03.295896 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 20 21:20:03.295914 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 20 21:20:03.295926 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 20 21:20:03.295992 kernel: audit: type=1403 audit(1742505602.443:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 20 21:20:03.296012 systemd[1]: Successfully loaded SELinux policy in 40.473ms.
Mar 20 21:20:03.296034 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.536ms.
Mar 20 21:20:03.296053 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:20:03.296066 systemd[1]: Detected virtualization kvm.
Mar 20 21:20:03.296079 systemd[1]: Detected architecture x86-64.
Mar 20 21:20:03.296091 systemd[1]: Detected first boot.
Mar 20 21:20:03.296111 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:20:03.296124 zram_generator::config[1058]: No configuration found.
Mar 20 21:20:03.296138 kernel: Guest personality initialized and is inactive
Mar 20 21:20:03.296149 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 20 21:20:03.296161 kernel: Initialized host personality
Mar 20 21:20:03.296173 kernel: NET: Registered PF_VSOCK protocol family
Mar 20 21:20:03.296185 systemd[1]: Populated /etc with preset unit settings.
Mar 20 21:20:03.296198 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 20 21:20:03.296211 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 20 21:20:03.296229 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 20 21:20:03.296241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 20 21:20:03.296255 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 20 21:20:03.296268 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 20 21:20:03.296281 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 20 21:20:03.296293 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 20 21:20:03.296306 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 20 21:20:03.296319 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 20 21:20:03.296337 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 20 21:20:03.296350 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 20 21:20:03.296363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:20:03.296376 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:20:03.296388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 20 21:20:03.296401 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 20 21:20:03.296421 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 20 21:20:03.296435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:20:03.296453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 20 21:20:03.296466 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:20:03.296479 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 20 21:20:03.296491 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 20 21:20:03.296504 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 20 21:20:03.296516 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 20 21:20:03.296538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:20:03.296551 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:20:03.296563 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:20:03.296586 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:20:03.296599 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 20 21:20:03.296611 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 20 21:20:03.296629 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 20 21:20:03.296641 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:20:03.296654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:20:03.296667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:20:03.296679 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 20 21:20:03.296692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 20 21:20:03.296710 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 20 21:20:03.296729 systemd[1]: Mounting media.mount - External Media Directory...
Mar 20 21:20:03.296742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:20:03.296755 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 20 21:20:03.296767 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 20 21:20:03.296780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 20 21:20:03.296793 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 20 21:20:03.296806 systemd[1]: Reached target machines.target - Containers.
Mar 20 21:20:03.296824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 20 21:20:03.296836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:20:03.296849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:20:03.296861 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 20 21:20:03.296874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:20:03.296886 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 21:20:03.296899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:20:03.296911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 20 21:20:03.296923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:20:03.296942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 20 21:20:03.296987 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 20 21:20:03.297000 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 20 21:20:03.297013 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 20 21:20:03.297025 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 20 21:20:03.297047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:20:03.297060 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:20:03.297072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:20:03.297091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 20 21:20:03.297103 kernel: fuse: init (API version 7.39)
Mar 20 21:20:03.297115 kernel: loop: module loaded
Mar 20 21:20:03.297127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 20 21:20:03.297140 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 20 21:20:03.297152 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:20:03.297170 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 20 21:20:03.297183 systemd[1]: Stopped verity-setup.service.
Mar 20 21:20:03.297196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:20:03.297208 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 20 21:20:03.297221 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 20 21:20:03.297233 kernel: ACPI: bus type drm_connector registered
Mar 20 21:20:03.297245 systemd[1]: Mounted media.mount - External Media Directory.
Mar 20 21:20:03.297257 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 20 21:20:03.297275 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 20 21:20:03.297292 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 20 21:20:03.297323 systemd-journald[1133]: Collecting audit messages is disabled.
Mar 20 21:20:03.297347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 20 21:20:03.297373 systemd-journald[1133]: Journal started
Mar 20 21:20:03.297396 systemd-journald[1133]: Runtime Journal (/run/log/journal/ee18de08850845a39eabb6967568c6a5) is 6M, max 48.2M, 42.2M free.
Mar 20 21:20:03.017211 systemd[1]: Queued start job for default target multi-user.target.
Mar 20 21:20:03.029037 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 20 21:20:03.029512 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 20 21:20:03.299125 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:20:03.300293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:20:03.301843 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 21:20:03.302087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 20 21:20:03.303595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:20:03.303815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:20:03.305358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 21:20:03.305604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 21:20:03.306991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:20:03.307209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:20:03.308707 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 20 21:20:03.308921 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 20 21:20:03.310310 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:20:03.310535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:20:03.311938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:20:03.313399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 20 21:20:03.315004 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 20 21:20:03.316594 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 20 21:20:03.331965 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 20 21:20:03.334864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:20:03.337448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:20:03.338692 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:20:03.338813 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:20:03.341206 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:20:03.356756 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:20:03.359451 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:20:03.361217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:20:03.363148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:20:03.370878 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 21:20:03.373148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:20:03.374310 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 20 21:20:03.375751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:20:03.379911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:20:03.388875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:20:03.391120 systemd-journald[1133]: Time spent on flushing to /var/log/journal/ee18de08850845a39eabb6967568c6a5 is 20.771ms for 1057 entries. 
Mar 20 21:20:03.391120 systemd-journald[1133]: System Journal (/var/log/journal/ee18de08850845a39eabb6967568c6a5) is 8M, max 195.6M, 187.6M free. Mar 20 21:20:03.454761 systemd-journald[1133]: Received client request to flush runtime journal. Mar 20 21:20:03.454823 kernel: loop0: detected capacity change from 0 to 109808 Mar 20 21:20:03.454860 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:20:03.398767 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:20:03.402917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:20:03.407406 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:20:03.409114 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:20:03.425094 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:20:03.427186 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:20:03.432702 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:20:03.437484 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:20:03.441124 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:20:03.452963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:20:03.456326 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:20:03.463713 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 20 21:20:03.475715 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:20:03.479785 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 20 21:20:03.483783 kernel: loop1: detected capacity change from 0 to 151640 Mar 20 21:20:03.483299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:20:03.517776 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 20 21:20:03.517794 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 20 21:20:03.522987 kernel: loop2: detected capacity change from 0 to 205544 Mar 20 21:20:03.527793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:20:03.562990 kernel: loop3: detected capacity change from 0 to 109808 Mar 20 21:20:03.576145 kernel: loop4: detected capacity change from 0 to 151640 Mar 20 21:20:03.589202 kernel: loop5: detected capacity change from 0 to 205544 Mar 20 21:20:03.598741 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:20:03.599556 (sd-merge)[1202]: Merged extensions into '/usr'. Mar 20 21:20:03.605111 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 21:20:03.605132 systemd[1]: Reloading... Mar 20 21:20:03.680996 zram_generator::config[1232]: No configuration found. Mar 20 21:20:03.742632 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:20:03.810690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:20:03.875090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:20:03.875834 systemd[1]: Reloading finished in 270 ms. Mar 20 21:20:03.896481 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:20:03.898095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Mar 20 21:20:03.912334 systemd[1]: Starting ensure-sysext.service... Mar 20 21:20:03.915469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:20:03.945243 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:20:03.945648 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:20:03.946705 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:20:03.947010 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 20 21:20:03.947093 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 20 21:20:03.951384 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:20:03.951399 systemd-tmpfiles[1268]: Skipping /boot Mar 20 21:20:03.952153 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:20:03.952171 systemd[1]: Reloading... Mar 20 21:20:03.967930 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:20:03.967948 systemd-tmpfiles[1268]: Skipping /boot Mar 20 21:20:04.014985 zram_generator::config[1300]: No configuration found. Mar 20 21:20:04.120757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:20:04.188679 systemd[1]: Reloading finished in 236 ms. Mar 20 21:20:04.203808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:20:04.228497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:20:04.238410 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 20 21:20:04.241071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 21:20:04.243697 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 21:20:04.251413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:20:04.255877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:20:04.260164 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:20:04.266037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:20:04.266361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:20:04.273872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:20:04.280139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:20:04.283351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:20:04.284621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:20:04.284784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:20:04.288434 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:20:04.289540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:20:04.294681 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 20 21:20:04.295317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:20:04.297319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:20:04.298798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:20:04.300795 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:20:04.309315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:20:04.309589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:20:04.315477 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Mar 20 21:20:04.316400 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:20:04.322683 augenrules[1370]: No rules Mar 20 21:20:04.323839 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:20:04.324220 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:20:04.329782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:20:04.329992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:20:04.332065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:20:04.337067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:20:04.345022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:20:04.355034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:20:04.356286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 20 21:20:04.356334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:20:04.357705 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 21:20:04.358744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:20:04.359141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:20:04.359729 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:20:04.369320 systemd[1]: Finished ensure-sysext.service. Mar 20 21:20:04.372545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:20:04.374595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:20:04.374836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:20:04.376546 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:20:04.376779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:20:04.378265 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:20:04.378680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:20:04.380406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:20:04.380649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:20:04.387339 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 21:20:04.403358 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 20 21:20:04.404560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:20:04.404644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:20:04.406851 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:20:04.408719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:20:04.417125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 20 21:20:04.456002 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390) Mar 20 21:20:04.489064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 20 21:20:04.490449 systemd-resolved[1339]: Positive Trust Anchors: Mar 20 21:20:04.490785 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:20:04.490862 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:20:04.494910 systemd-resolved[1339]: Defaulting to hostname 'linux'. 
Mar 20 21:20:04.495012 kernel: ACPI: button: Power Button [PWRF] Mar 20 21:20:04.496858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:20:04.498340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:20:04.528981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 20 21:20:04.533045 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 20 21:20:04.541393 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 20 21:20:04.541600 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 20 21:20:04.541796 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 20 21:20:04.537135 systemd-networkd[1412]: lo: Link UP Mar 20 21:20:04.537140 systemd-networkd[1412]: lo: Gained carrier Mar 20 21:20:04.539713 systemd-networkd[1412]: Enumeration completed Mar 20 21:20:04.539750 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:20:04.541121 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:20:04.542266 systemd[1]: Reached target network.target - Network. Mar 20 21:20:04.543272 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:20:04.543452 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:20:04.543458 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:20:04.544340 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 20 21:20:04.544377 systemd-networkd[1412]: eth0: Link UP Mar 20 21:20:04.544382 systemd-networkd[1412]: eth0: Gained carrier Mar 20 21:20:04.544393 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:20:04.546018 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:20:04.552063 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:20:04.556639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:20:04.564852 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 21:20:04.568999 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:20:04.569732 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Mar 20 21:20:05.495805 systemd-resolved[1339]: Clock change detected. Flushing caches. Mar 20 21:20:05.496078 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:20:05.496159 systemd-timesyncd[1413]: Initial clock synchronization to Thu 2025-03-20 21:20:05.495729 UTC. Mar 20 21:20:05.503070 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:20:05.518986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:20:05.520855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:20:05.525621 kernel: mousedev: PS/2 mouse device common for all mice Mar 20 21:20:05.542348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:20:05.542628 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 20 21:20:05.551592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:20:05.602639 kernel: kvm_amd: TSC scaling supported Mar 20 21:20:05.602732 kernel: kvm_amd: Nested Virtualization enabled Mar 20 21:20:05.602747 kernel: kvm_amd: Nested Paging enabled Mar 20 21:20:05.602759 kernel: kvm_amd: LBR virtualization supported Mar 20 21:20:05.603714 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 20 21:20:05.603868 kernel: kvm_amd: Virtual GIF supported Mar 20 21:20:05.624631 kernel: EDAC MC: Ver: 3.0.0 Mar 20 21:20:05.654458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:20:05.667897 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:20:05.670795 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:20:05.695286 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:20:05.727850 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:20:05.729444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:20:05.730591 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:20:05.731796 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:20:05.733093 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:20:05.734556 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:20:05.735778 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:20:05.737250 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 20 21:20:05.738511 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:20:05.738540 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:20:05.739468 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:20:05.741349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:20:05.744056 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:20:05.747789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:20:05.749221 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:20:05.750496 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:20:05.754238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:20:05.755744 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:20:05.758128 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:20:05.759795 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:20:05.760983 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:20:05.761973 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:20:05.762990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:20:05.763018 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:20:05.771483 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:20:05.773578 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:20:05.776614 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 20 21:20:05.775591 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:20:05.779799 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:20:05.780922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:20:05.782050 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:20:05.784707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:20:05.787752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:20:05.793110 jq[1454]: false Mar 20 21:20:05.793245 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:20:05.801170 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:20:05.803666 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:20:05.804146 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:20:05.806145 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:20:05.808844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 20 21:20:05.810073 dbus-daemon[1453]: [system] SELinux support is enabled Mar 20 21:20:05.817156 extend-filesystems[1455]: Found loop3 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found loop4 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found loop5 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found sr0 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda1 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda2 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda3 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found usr Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda4 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda6 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda7 Mar 20 21:20:05.817156 extend-filesystems[1455]: Found vda9 Mar 20 21:20:05.817156 extend-filesystems[1455]: Checking size of /dev/vda9 Mar 20 21:20:05.817338 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:20:05.820482 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:20:05.824877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:20:05.827636 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:20:05.838087 jq[1470]: true Mar 20 21:20:05.828040 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:20:05.828311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 21:20:05.830718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:20:05.831706 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 20 21:20:05.838446 update_engine[1465]: I20250320 21:20:05.837563 1465 main.cc:92] Flatcar Update Engine starting Mar 20 21:20:05.844338 update_engine[1465]: I20250320 21:20:05.839588 1465 update_check_scheduler.cc:74] Next update check in 7m56s Mar 20 21:20:05.844366 extend-filesystems[1455]: Resized partition /dev/vda9 Mar 20 21:20:05.849209 jq[1476]: true Mar 20 21:20:05.851516 extend-filesystems[1484]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:20:05.856637 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:20:05.861651 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1403) Mar 20 21:20:05.862183 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:20:05.873831 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:20:05.873864 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:20:05.875207 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:20:05.875227 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:20:05.876803 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:20:05.886724 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:20:05.885490 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 20 21:20:05.889472 tar[1475]: linux-amd64/helm Mar 20 21:20:05.914172 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:20:05.914172 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:20:05.914172 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:20:05.924363 extend-filesystems[1455]: Resized filesystem in /dev/vda9 Mar 20 21:20:05.922681 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:20:05.922956 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:20:05.929050 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button) Mar 20 21:20:05.929079 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 20 21:20:05.930167 systemd-logind[1460]: New seat seat0. Mar 20 21:20:05.932632 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:20:05.944625 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:20:05.948668 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:20:05.955394 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 20 21:20:05.977673 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:20:06.004844 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:20:06.074010 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:20:06.085686 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:20:06.112798 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:20:06.113367 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:20:06.117971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 20 21:20:06.207703 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:20:06.211510 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:20:06.216951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 20 21:20:06.219632 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:20:06.265705 containerd[1483]: time="2025-03-20T21:20:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:20:06.266959 containerd[1483]: time="2025-03-20T21:20:06.266918708Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:20:06.296776 containerd[1483]: time="2025-03-20T21:20:06.296703537Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.934µs" Mar 20 21:20:06.296776 containerd[1483]: time="2025-03-20T21:20:06.296747008Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:20:06.296776 containerd[1483]: time="2025-03-20T21:20:06.296767968Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:20:06.297034 containerd[1483]: time="2025-03-20T21:20:06.296993711Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:20:06.297034 containerd[1483]: time="2025-03-20T21:20:06.297023267Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:20:06.297076 containerd[1483]: time="2025-03-20T21:20:06.297052151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297168 containerd[1483]: time="2025-03-20T21:20:06.297136249Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297168 containerd[1483]: time="2025-03-20T21:20:06.297155575Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297509 containerd[1483]: time="2025-03-20T21:20:06.297476557Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297509 containerd[1483]: time="2025-03-20T21:20:06.297496635Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297562 containerd[1483]: time="2025-03-20T21:20:06.297529246Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297562 containerd[1483]: time="2025-03-20T21:20:06.297539946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297705 containerd[1483]: time="2025-03-20T21:20:06.297675049Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:20:06.297966 containerd[1483]: time="2025-03-20T21:20:06.297933554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:20:06.298013 containerd[1483]: time="2025-03-20T21:20:06.297970874Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:20:06.298013 containerd[1483]: 
time="2025-03-20T21:20:06.297982346Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:20:06.298057 containerd[1483]: time="2025-03-20T21:20:06.298030366Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:20:06.298284 containerd[1483]: time="2025-03-20T21:20:06.298253495Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:20:06.298359 containerd[1483]: time="2025-03-20T21:20:06.298333845Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:20:06.304921 containerd[1483]: time="2025-03-20T21:20:06.304854609Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.304932385Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.304949096Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.304962802Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.304975776Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.304986386Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.305000032Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:20:06.305064 containerd[1483]: time="2025-03-20T21:20:06.305047701Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:20:06.305213 containerd[1483]: time="2025-03-20T21:20:06.305071896Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:20:06.305213 containerd[1483]: time="2025-03-20T21:20:06.305086343Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:20:06.305213 containerd[1483]: time="2025-03-20T21:20:06.305096122Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:20:06.305213 containerd[1483]: time="2025-03-20T21:20:06.305112843Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:20:06.305331 containerd[1483]: time="2025-03-20T21:20:06.305304222Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:20:06.305356 containerd[1483]: time="2025-03-20T21:20:06.305332816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:20:06.305395 containerd[1483]: time="2025-03-20T21:20:06.305351952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:20:06.305395 containerd[1483]: time="2025-03-20T21:20:06.305367952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:20:06.305395 containerd[1483]: time="2025-03-20T21:20:06.305386386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:20:06.305456 containerd[1483]: time="2025-03-20T21:20:06.305413808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:20:06.305456 containerd[1483]: time="2025-03-20T21:20:06.305433795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Mar 20 21:20:06.305456 containerd[1483]: time="2025-03-20T21:20:06.305446008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:20:06.305528 containerd[1483]: time="2025-03-20T21:20:06.305471556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:20:06.305528 containerd[1483]: time="2025-03-20T21:20:06.305486344Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:20:06.305528 containerd[1483]: time="2025-03-20T21:20:06.305497345Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:20:06.305592 containerd[1483]: time="2025-03-20T21:20:06.305578096Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:20:06.305639 containerd[1483]: time="2025-03-20T21:20:06.305592433Z" level=info msg="Start snapshots syncer" Mar 20 21:20:06.305662 containerd[1483]: time="2025-03-20T21:20:06.305638539Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:20:06.305949 containerd[1483]: time="2025-03-20T21:20:06.305905921Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:20:06.306111 containerd[1483]: time="2025-03-20T21:20:06.305968629Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:20:06.306111 containerd[1483]: time="2025-03-20T21:20:06.306055702Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:20:06.306207 containerd[1483]: time="2025-03-20T21:20:06.306179835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:20:06.306239 containerd[1483]: time="2025-03-20T21:20:06.306206765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:20:06.306239 containerd[1483]: time="2025-03-20T21:20:06.306220161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:20:06.306239 containerd[1483]: time="2025-03-20T21:20:06.306232113Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:20:06.306295 containerd[1483]: time="2025-03-20T21:20:06.306273190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:20:06.306295 containerd[1483]: time="2025-03-20T21:20:06.306287096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:20:06.306337 containerd[1483]: time="2025-03-20T21:20:06.306298117Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:20:06.306337 containerd[1483]: time="2025-03-20T21:20:06.306321180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:20:06.306390 containerd[1483]: time="2025-03-20T21:20:06.306344324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:20:06.306390 containerd[1483]: time="2025-03-20T21:20:06.306355935Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:20:06.307646 containerd[1483]: time="2025-03-20T21:20:06.307619446Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:20:06.307690 containerd[1483]: time="2025-03-20T21:20:06.307649322Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:20:06.307690 containerd[1483]: time="2025-03-20T21:20:06.307660463Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:20:06.307690 containerd[1483]: time="2025-03-20T21:20:06.307670592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:20:06.307690 containerd[1483]: time="2025-03-20T21:20:06.307679078Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:20:06.307690 containerd[1483]: time="2025-03-20T21:20:06.307688766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307699476Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307719413Z" level=info msg="runtime interface created" Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307727058Z" level=info msg="created NRI interface" Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307736345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307747396Z" level=info msg="Connect containerd service" Mar 20 21:20:06.307806 containerd[1483]: time="2025-03-20T21:20:06.307771321Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:20:06.315615 
containerd[1483]: time="2025-03-20T21:20:06.315189398Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:20:06.520238 containerd[1483]: time="2025-03-20T21:20:06.520087190Z" level=info msg="Start subscribing containerd event" Mar 20 21:20:06.520238 containerd[1483]: time="2025-03-20T21:20:06.520163303Z" level=info msg="Start recovering state" Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520291644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520300270Z" level=info msg="Start event monitor" Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520342289Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520349462Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520350394Z" level=info msg="Start streaming server" Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520398094Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520406590Z" level=info msg="runtime interface starting up..." Mar 20 21:20:06.520412 containerd[1483]: time="2025-03-20T21:20:06.520413533Z" level=info msg="starting plugins..." Mar 20 21:20:06.520564 containerd[1483]: time="2025-03-20T21:20:06.520434883Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:20:06.520743 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 20 21:20:06.522008 containerd[1483]: time="2025-03-20T21:20:06.521834008Z" level=info msg="containerd successfully booted in 0.256773s" Mar 20 21:20:06.580458 tar[1475]: linux-amd64/LICENSE Mar 20 21:20:06.580576 tar[1475]: linux-amd64/README.md Mar 20 21:20:06.602034 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:20:06.904856 systemd-networkd[1412]: eth0: Gained IPv6LL Mar 20 21:20:06.908138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:20:06.909928 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:20:06.913068 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:20:06.915647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:20:06.924228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:20:06.948641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:20:06.951324 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:20:06.951747 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:20:06.954320 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:20:08.066851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:20:08.068765 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:20:08.070139 systemd[1]: Startup finished in 1.307s (kernel) + 6.678s (initrd) + 4.743s (userspace) = 12.729s. 
Mar 20 21:20:08.077962 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:20:08.661802 kubelet[1581]: E0320 21:20:08.661715 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:20:08.665990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:20:08.666204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:20:08.666619 systemd[1]: kubelet.service: Consumed 1.617s CPU time, 236.1M memory peak. Mar 20 21:20:09.537144 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:20:09.538483 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:59940.service - OpenSSH per-connection server daemon (10.0.0.1:59940). Mar 20 21:20:09.606429 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 59940 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:09.608326 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:09.620562 systemd-logind[1460]: New session 1 of user core. Mar 20 21:20:09.622222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:20:09.623649 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:20:09.679395 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:20:09.682281 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 20 21:20:09.706891 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:20:09.709328 systemd-logind[1460]: New session c1 of user core. Mar 20 21:20:09.871577 systemd[1599]: Queued start job for default target default.target. Mar 20 21:20:09.883375 systemd[1599]: Created slice app.slice - User Application Slice. Mar 20 21:20:09.883423 systemd[1599]: Reached target paths.target - Paths. Mar 20 21:20:09.883469 systemd[1599]: Reached target timers.target - Timers. Mar 20 21:20:09.887328 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:20:09.901723 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:20:09.901907 systemd[1599]: Reached target sockets.target - Sockets. Mar 20 21:20:09.901965 systemd[1599]: Reached target basic.target - Basic System. Mar 20 21:20:09.902028 systemd[1599]: Reached target default.target - Main User Target. Mar 20 21:20:09.902086 systemd[1599]: Startup finished in 184ms. Mar 20 21:20:09.902580 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:20:09.904625 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:20:09.964349 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:59944.service - OpenSSH per-connection server daemon (10.0.0.1:59944). Mar 20 21:20:10.017717 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 59944 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.019146 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.023388 systemd-logind[1460]: New session 2 of user core. Mar 20 21:20:10.036744 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 20 21:20:10.093384 sshd[1612]: Connection closed by 10.0.0.1 port 59944 Mar 20 21:20:10.093833 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Mar 20 21:20:10.118115 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:59944.service: Deactivated successfully. Mar 20 21:20:10.119950 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:20:10.121408 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:20:10.122679 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:59946.service - OpenSSH per-connection server daemon (10.0.0.1:59946). Mar 20 21:20:10.123872 systemd-logind[1460]: Removed session 2. Mar 20 21:20:10.174594 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 59946 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.176292 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.180660 systemd-logind[1460]: New session 3 of user core. Mar 20 21:20:10.192729 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:20:10.241485 sshd[1620]: Connection closed by 10.0.0.1 port 59946 Mar 20 21:20:10.241786 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Mar 20 21:20:10.267076 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:59946.service: Deactivated successfully. Mar 20 21:20:10.268876 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:20:10.270294 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:20:10.271546 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954). Mar 20 21:20:10.272380 systemd-logind[1460]: Removed session 3. 
Mar 20 21:20:10.322112 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.323809 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.328320 systemd-logind[1460]: New session 4 of user core. Mar 20 21:20:10.337753 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:20:10.391502 sshd[1628]: Connection closed by 10.0.0.1 port 59954 Mar 20 21:20:10.391761 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Mar 20 21:20:10.410150 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:59954.service: Deactivated successfully. Mar 20 21:20:10.411978 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:20:10.413416 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:20:10.414824 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:59964.service - OpenSSH per-connection server daemon (10.0.0.1:59964). Mar 20 21:20:10.415548 systemd-logind[1460]: Removed session 4. Mar 20 21:20:10.467733 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 59964 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.469124 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.473426 systemd-logind[1460]: New session 5 of user core. Mar 20 21:20:10.482722 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 21:20:10.540657 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:20:10.540989 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:20:10.560667 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 20 21:20:10.562236 sshd[1636]: Connection closed by 10.0.0.1 port 59964 Mar 20 21:20:10.562647 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Mar 20 21:20:10.584267 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:59964.service: Deactivated successfully. Mar 20 21:20:10.586113 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:20:10.587531 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:20:10.589031 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:59970.service - OpenSSH per-connection server daemon (10.0.0.1:59970). Mar 20 21:20:10.589851 systemd-logind[1460]: Removed session 5. Mar 20 21:20:10.642822 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 59970 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.644716 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.649983 systemd-logind[1460]: New session 6 of user core. Mar 20 21:20:10.660810 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 20 21:20:10.717386 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:20:10.717805 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:20:10.722120 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 20 21:20:10.729466 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:20:10.729839 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:20:10.740709 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:20:10.787222 augenrules[1669]: No rules Mar 20 21:20:10.789400 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:20:10.789724 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:20:10.791000 sudo[1646]: pam_unix(sudo:session): session closed for user root Mar 20 21:20:10.792685 sshd[1645]: Connection closed by 10.0.0.1 port 59970 Mar 20 21:20:10.793085 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 20 21:20:10.806619 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:59970.service: Deactivated successfully. Mar 20 21:20:10.808719 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:20:10.810417 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:20:10.811973 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:59978.service - OpenSSH per-connection server daemon (10.0.0.1:59978). Mar 20 21:20:10.812754 systemd-logind[1460]: Removed session 6. Mar 20 21:20:10.882126 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:20:10.883957 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:20:10.888993 systemd-logind[1460]: New session 7 of user core. 
Mar 20 21:20:10.898756 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 20 21:20:10.954355 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:20:10.954858 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:20:11.557906 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:20:11.572015 (dockerd)[1701]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:20:12.119107 dockerd[1701]: time="2025-03-20T21:20:12.119021140Z" level=info msg="Starting up" Mar 20 21:20:12.120246 dockerd[1701]: time="2025-03-20T21:20:12.120178752Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:20:12.402570 dockerd[1701]: time="2025-03-20T21:20:12.402382336Z" level=info msg="Loading containers: start." Mar 20 21:20:12.687637 kernel: Initializing XFRM netlink socket Mar 20 21:20:12.773664 systemd-networkd[1412]: docker0: Link UP Mar 20 21:20:12.843780 dockerd[1701]: time="2025-03-20T21:20:12.843708776Z" level=info msg="Loading containers: done." Mar 20 21:20:12.859313 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2157005136-merged.mount: Deactivated successfully. 
Mar 20 21:20:12.862006 dockerd[1701]: time="2025-03-20T21:20:12.861930030Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:20:12.862177 dockerd[1701]: time="2025-03-20T21:20:12.862056988Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:20:12.862232 dockerd[1701]: time="2025-03-20T21:20:12.862183856Z" level=info msg="Daemon has completed initialization" Mar 20 21:20:12.904777 dockerd[1701]: time="2025-03-20T21:20:12.904679131Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:20:12.904922 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:20:14.033678 containerd[1483]: time="2025-03-20T21:20:14.033626847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 20 21:20:14.755247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780026183.mount: Deactivated successfully. 
Mar 20 21:20:16.229653 containerd[1483]: time="2025-03-20T21:20:16.229553803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:16.230553 containerd[1483]: time="2025-03-20T21:20:16.230473779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268" Mar 20 21:20:16.231645 containerd[1483]: time="2025-03-20T21:20:16.231609800Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:16.234187 containerd[1483]: time="2025-03-20T21:20:16.234135599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:16.235175 containerd[1483]: time="2025-03-20T21:20:16.235143950Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 2.201468011s" Mar 20 21:20:16.235248 containerd[1483]: time="2025-03-20T21:20:16.235178475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 20 21:20:16.237316 containerd[1483]: time="2025-03-20T21:20:16.237286310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 20 21:20:18.052365 containerd[1483]: time="2025-03-20T21:20:18.052299674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:18.053141 containerd[1483]: time="2025-03-20T21:20:18.053052707Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776" Mar 20 21:20:18.054234 containerd[1483]: time="2025-03-20T21:20:18.054199929Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:18.056738 containerd[1483]: time="2025-03-20T21:20:18.056706742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:18.057535 containerd[1483]: time="2025-03-20T21:20:18.057503888Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 1.820190297s" Mar 20 21:20:18.057617 containerd[1483]: time="2025-03-20T21:20:18.057536168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 20 21:20:18.058198 containerd[1483]: time="2025-03-20T21:20:18.058048660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 20 21:20:18.916745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:20:18.918717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:20:19.303543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 20 21:20:19.317017 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:20:19.552576 kubelet[1974]: E0320 21:20:19.552480 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:20:19.559578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:20:19.560038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:20:19.560590 systemd[1]: kubelet.service: Consumed 598ms CPU time, 98.2M memory peak. Mar 20 21:20:20.250417 containerd[1483]: time="2025-03-20T21:20:20.250363799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:20.266171 containerd[1483]: time="2025-03-20T21:20:20.266095593Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368" Mar 20 21:20:20.284560 containerd[1483]: time="2025-03-20T21:20:20.284527423Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:20.311429 containerd[1483]: time="2025-03-20T21:20:20.311370162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:20.312592 containerd[1483]: time="2025-03-20T21:20:20.312550947Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id 
\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 2.254469946s" Mar 20 21:20:20.312592 containerd[1483]: time="2025-03-20T21:20:20.312589099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 20 21:20:20.313270 containerd[1483]: time="2025-03-20T21:20:20.313090269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 20 21:20:21.458816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177631529.mount: Deactivated successfully. Mar 20 21:20:22.167819 containerd[1483]: time="2025-03-20T21:20:22.167727430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:22.168392 containerd[1483]: time="2025-03-20T21:20:22.168345059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 20 21:20:22.169740 containerd[1483]: time="2025-03-20T21:20:22.169693629Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:22.171984 containerd[1483]: time="2025-03-20T21:20:22.171923633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:22.172440 containerd[1483]: time="2025-03-20T21:20:22.172400838Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 1.859282758s" Mar 20 21:20:22.172440 containerd[1483]: time="2025-03-20T21:20:22.172431115Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 20 21:20:22.173201 containerd[1483]: time="2025-03-20T21:20:22.173129295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 21:20:22.955566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740123355.mount: Deactivated successfully. Mar 20 21:20:24.016121 containerd[1483]: time="2025-03-20T21:20:24.016028125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:24.016867 containerd[1483]: time="2025-03-20T21:20:24.016785465Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 20 21:20:24.018045 containerd[1483]: time="2025-03-20T21:20:24.017978894Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:24.020869 containerd[1483]: time="2025-03-20T21:20:24.020801500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:24.022041 containerd[1483]: time="2025-03-20T21:20:24.021987805Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.848793038s" Mar 20 21:20:24.022100 containerd[1483]: time="2025-03-20T21:20:24.022043690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 20 21:20:24.022673 containerd[1483]: time="2025-03-20T21:20:24.022644698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 20 21:20:24.559082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547909889.mount: Deactivated successfully. Mar 20 21:20:24.565907 containerd[1483]: time="2025-03-20T21:20:24.565837035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:20:24.566729 containerd[1483]: time="2025-03-20T21:20:24.566649490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 20 21:20:24.567786 containerd[1483]: time="2025-03-20T21:20:24.567751236Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:20:24.570238 containerd[1483]: time="2025-03-20T21:20:24.570197836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:20:24.570909 containerd[1483]: time="2025-03-20T21:20:24.570879906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.201505ms" Mar 20 21:20:24.570947 containerd[1483]: time="2025-03-20T21:20:24.570914461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 20 21:20:24.571491 containerd[1483]: time="2025-03-20T21:20:24.571463581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 20 21:20:25.087687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258274361.mount: Deactivated successfully. Mar 20 21:20:26.719392 containerd[1483]: time="2025-03-20T21:20:26.719321716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:26.720219 containerd[1483]: time="2025-03-20T21:20:26.720133660Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Mar 20 21:20:26.721588 containerd[1483]: time="2025-03-20T21:20:26.721545017Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:26.724309 containerd[1483]: time="2025-03-20T21:20:26.724268397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:26.725488 containerd[1483]: time="2025-03-20T21:20:26.725450775Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.15394767s" Mar 20 21:20:26.725533 containerd[1483]: time="2025-03-20T21:20:26.725493806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 20 21:20:29.469747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:20:29.469929 systemd[1]: kubelet.service: Consumed 598ms CPU time, 98.2M memory peak. Mar 20 21:20:29.472254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:20:29.499427 systemd[1]: Reload requested from client PID 2125 ('systemctl') (unit session-7.scope)... Mar 20 21:20:29.499456 systemd[1]: Reloading... Mar 20 21:20:29.597638 zram_generator::config[2178]: No configuration found. Mar 20 21:20:29.863159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:20:29.966141 systemd[1]: Reloading finished in 466 ms. Mar 20 21:20:30.030910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:20:30.034720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:20:30.035481 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:20:30.035784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:20:30.035820 systemd[1]: kubelet.service: Consumed 158ms CPU time, 83.6M memory peak. Mar 20 21:20:30.037392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:20:30.204283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 20 21:20:30.208974 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:20:30.351282 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:20:30.351282 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:20:30.351282 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:20:30.351752 kubelet[2220]: I0320 21:20:30.351334 2220 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:20:30.838202 kubelet[2220]: I0320 21:20:30.838130 2220 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 21:20:30.838202 kubelet[2220]: I0320 21:20:30.838184 2220 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:20:30.838551 kubelet[2220]: I0320 21:20:30.838525 2220 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 21:20:30.863300 kubelet[2220]: I0320 21:20:30.863232 2220 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:20:30.866314 kubelet[2220]: E0320 21:20:30.865502 2220 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:20:30.875328 kubelet[2220]: I0320 21:20:30.875298 2220 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:20:30.881961 kubelet[2220]: I0320 21:20:30.881931 2220 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 21:20:30.882897 kubelet[2220]: I0320 21:20:30.882871 2220 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 21:20:30.883084 kubelet[2220]: I0320 21:20:30.883042 2220 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:20:30.883257 kubelet[2220]: I0320 21:20:30.883074 2220 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesF
ree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:20:30.883257 kubelet[2220]: I0320 21:20:30.883257 2220 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:20:30.883472 kubelet[2220]: I0320 21:20:30.883266 2220 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 21:20:30.883472 kubelet[2220]: I0320 21:20:30.883386 2220 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:20:30.884827 kubelet[2220]: I0320 21:20:30.884789 2220 kubelet.go:408] "Attempting to sync node with API server" Mar 20 21:20:30.884827 kubelet[2220]: I0320 21:20:30.884811 2220 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:20:30.884908 kubelet[2220]: I0320 21:20:30.884884 2220 kubelet.go:314] "Adding apiserver pod source" Mar 20 21:20:30.884954 kubelet[2220]: I0320 21:20:30.884928 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:20:30.891680 kubelet[2220]: W0320 21:20:30.891619 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 20 21:20:30.891753 kubelet[2220]: E0320 21:20:30.891686 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:20:30.892611 kubelet[2220]: W0320 21:20:30.892530 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 20 21:20:30.892674 kubelet[2220]: E0320 21:20:30.892632 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:20:30.893455 kubelet[2220]: I0320 21:20:30.893435 2220 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:20:30.895333 kubelet[2220]: I0320 21:20:30.895306 2220 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:20:30.895959 kubelet[2220]: W0320 21:20:30.895934 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 20 21:20:30.896852 kubelet[2220]: I0320 21:20:30.896830 2220 server.go:1269] "Started kubelet" Mar 20 21:20:30.897051 kubelet[2220]: I0320 21:20:30.896954 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:20:30.897953 kubelet[2220]: I0320 21:20:30.897411 2220 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:20:30.897953 kubelet[2220]: I0320 21:20:30.897484 2220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:20:30.899373 kubelet[2220]: I0320 21:20:30.898424 2220 server.go:460] "Adding debug handlers to kubelet server" Mar 20 21:20:30.899741 kubelet[2220]: I0320 21:20:30.899703 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:20:30.899977 kubelet[2220]: I0320 21:20:30.899952 2220 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:20:30.900903 kubelet[2220]: I0320 21:20:30.900883 2220 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 21:20:30.901645 kubelet[2220]: I0320 21:20:30.901614 2220 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 21:20:30.901711 kubelet[2220]: I0320 21:20:30.901703 2220 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:20:30.902100 kubelet[2220]: W0320 21:20:30.902046 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 20 21:20:30.902100 kubelet[2220]: E0320 21:20:30.902094 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:20:30.902198 kubelet[2220]: E0320 21:20:30.902175 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:20:30.904041 kubelet[2220]: E0320 21:20:30.903997 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Mar 20 21:20:30.906580 kubelet[2220]: E0320 21:20:30.903999 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9fa39cb8cd62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:20:30.89680317 +0000 UTC m=+0.683727773,LastTimestamp:2025-03-20 21:20:30.89680317 +0000 UTC m=+0.683727773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:20:30.906832 kubelet[2220]: E0320 21:20:30.906793 2220 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:20:30.910681 kubelet[2220]: I0320 21:20:30.910648 2220 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:20:30.910681 kubelet[2220]: I0320 21:20:30.910671 2220 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:20:30.910842 kubelet[2220]: I0320 21:20:30.910779 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:20:30.925789 kubelet[2220]: I0320 21:20:30.925460 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:20:30.925789 kubelet[2220]: I0320 21:20:30.925511 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:20:30.925789 kubelet[2220]: I0320 21:20:30.925535 2220 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:20:30.926240 kubelet[2220]: I0320 21:20:30.926219 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:20:30.928612 kubelet[2220]: I0320 21:20:30.928548 2220 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:20:30.928669 kubelet[2220]: I0320 21:20:30.928629 2220 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:20:30.928669 kubelet[2220]: I0320 21:20:30.928653 2220 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:20:30.928738 kubelet[2220]: E0320 21:20:30.928709 2220 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:20:30.930001 kubelet[2220]: W0320 21:20:30.929821 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 20 21:20:30.930001 kubelet[2220]: E0320 21:20:30.929885 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:20:31.002333 kubelet[2220]: E0320 21:20:31.002287 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:20:31.029768 kubelet[2220]: E0320 21:20:31.029731 2220 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:20:31.103315 kubelet[2220]: E0320 21:20:31.103226 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:20:31.104797 kubelet[2220]: E0320 21:20:31.104697 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: 
connection refused" interval="400ms" Mar 20 21:20:31.109304 kubelet[2220]: I0320 21:20:31.109271 2220 policy_none.go:49] "None policy: Start" Mar 20 21:20:31.110165 kubelet[2220]: I0320 21:20:31.110128 2220 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:20:31.110246 kubelet[2220]: I0320 21:20:31.110173 2220 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:20:31.118279 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:20:31.135069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:20:31.138235 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:20:31.147963 kubelet[2220]: I0320 21:20:31.147899 2220 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:20:31.148245 kubelet[2220]: I0320 21:20:31.148215 2220 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:20:31.148392 kubelet[2220]: I0320 21:20:31.148236 2220 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:20:31.148646 kubelet[2220]: I0320 21:20:31.148630 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:20:31.150211 kubelet[2220]: E0320 21:20:31.150174 2220 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:20:31.238386 systemd[1]: Created slice kubepods-burstable-pod996904c21121c141f58e5782ca614f29.slice - libcontainer container kubepods-burstable-pod996904c21121c141f58e5782ca614f29.slice. 
Mar 20 21:20:31.249955 kubelet[2220]: I0320 21:20:31.249867 2220 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:20:31.250217 kubelet[2220]: E0320 21:20:31.250190 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Mar 20 21:20:31.250285 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 20 21:20:31.267999 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 20 21:20:31.304009 kubelet[2220]: I0320 21:20:31.303923 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:20:31.304191 kubelet[2220]: I0320 21:20:31.304005 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:20:31.304191 kubelet[2220]: I0320 21:20:31.304091 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 
21:20:31.304191 kubelet[2220]: I0320 21:20:31.304115 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:20:31.304191 kubelet[2220]: I0320 21:20:31.304143 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:20:31.304191 kubelet[2220]: I0320 21:20:31.304160 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:20:31.304361 kubelet[2220]: I0320 21:20:31.304181 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:20:31.304361 kubelet[2220]: I0320 21:20:31.304212 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 
21:20:31.304361 kubelet[2220]: I0320 21:20:31.304246 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:31.452026 kubelet[2220]: I0320 21:20:31.451884 2220 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 21:20:31.452480 kubelet[2220]: E0320 21:20:31.452233 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Mar 20 21:20:31.506123 kubelet[2220]: E0320 21:20:31.506063 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
Mar 20 21:20:31.548588 kubelet[2220]: E0320 21:20:31.548526 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:31.549458 containerd[1483]: time="2025-03-20T21:20:31.549414836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:996904c21121c141f58e5782ca614f29,Namespace:kube-system,Attempt:0,}"
Mar 20 21:20:31.565719 kubelet[2220]: E0320 21:20:31.565674 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:31.566299 containerd[1483]: time="2025-03-20T21:20:31.566249739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}"
Mar 20 21:20:31.571469 kubelet[2220]: E0320 21:20:31.571421 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:31.571857 containerd[1483]: time="2025-03-20T21:20:31.571822925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}"
Mar 20 21:20:31.720732 kubelet[2220]: W0320 21:20:31.720488 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 20 21:20:31.720732 kubelet[2220]: E0320 21:20:31.720653 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:20:31.825124 kubelet[2220]: W0320 21:20:31.825036 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 20 21:20:31.825254 kubelet[2220]: E0320 21:20:31.825134 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:20:31.838332 containerd[1483]: time="2025-03-20T21:20:31.838276030Z" level=info msg="connecting to shim 23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164" address="unix:///run/containerd/s/c1c8c05b44b7a6738cbb11132441d10267ca9113c3151ecfa43c88d5b3641df2" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:20:31.840882 containerd[1483]: time="2025-03-20T21:20:31.840773305Z" level=info msg="connecting to shim 201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638" address="unix:///run/containerd/s/52e18e3a238fbb0bd97a22eaaf07133df66288fcf32f099606c9a0010848b62c" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:20:31.851287 containerd[1483]: time="2025-03-20T21:20:31.851008888Z" level=info msg="connecting to shim 0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10" address="unix:///run/containerd/s/1e4c347c578f58c8a79798c7376034025d3a6f0bf58cbc002c57a02a3d7110db" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:20:31.854392 kubelet[2220]: I0320 21:20:31.854354 2220 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 21:20:31.854754 kubelet[2220]: E0320 21:20:31.854724 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Mar 20 21:20:31.912805 systemd[1]: Started cri-containerd-0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10.scope - libcontainer container 0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10.
Mar 20 21:20:31.920167 systemd[1]: Started cri-containerd-201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638.scope - libcontainer container 201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638.
Mar 20 21:20:31.922924 systemd[1]: Started cri-containerd-23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164.scope - libcontainer container 23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164.
Mar 20 21:20:32.060761 containerd[1483]: time="2025-03-20T21:20:32.060624251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10\""
Mar 20 21:20:32.061928 kubelet[2220]: E0320 21:20:32.061890 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:32.063796 containerd[1483]: time="2025-03-20T21:20:32.063764342Z" level=info msg="CreateContainer within sandbox \"0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 20 21:20:32.081123 containerd[1483]: time="2025-03-20T21:20:32.081071472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164\""
Mar 20 21:20:32.081661 kubelet[2220]: E0320 21:20:32.081629 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:32.083195 containerd[1483]: time="2025-03-20T21:20:32.083155061Z" level=info msg="CreateContainer within sandbox \"23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 20 21:20:32.123967 containerd[1483]: time="2025-03-20T21:20:32.123922313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:996904c21121c141f58e5782ca614f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638\""
Mar 20 21:20:32.124399 kubelet[2220]: E0320 21:20:32.124381 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:32.126045 containerd[1483]: time="2025-03-20T21:20:32.126014839Z" level=info msg="CreateContainer within sandbox \"201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 20 21:20:32.138772 containerd[1483]: time="2025-03-20T21:20:32.138732558Z" level=info msg="Container 0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:20:32.151623 containerd[1483]: time="2025-03-20T21:20:32.151575121Z" level=info msg="Container 2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:20:32.152389 containerd[1483]: time="2025-03-20T21:20:32.152349875Z" level=info msg="CreateContainer within sandbox \"0cc5acfac1a40380ce8628a82740db9e4b4ff9ee92ff928693bf93df3b6d3a10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d\""
Mar 20 21:20:32.153405 containerd[1483]: time="2025-03-20T21:20:32.153372694Z" level=info msg="StartContainer for \"0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d\""
Mar 20 21:20:32.154950 containerd[1483]: time="2025-03-20T21:20:32.154898877Z" level=info msg="Container f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:20:32.155192 containerd[1483]: time="2025-03-20T21:20:32.155162411Z" level=info msg="connecting to shim 0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d" address="unix:///run/containerd/s/1e4c347c578f58c8a79798c7376034025d3a6f0bf58cbc002c57a02a3d7110db" protocol=ttrpc version=3
Mar 20 21:20:32.163344 containerd[1483]: time="2025-03-20T21:20:32.163280041Z" level=info msg="CreateContainer within sandbox \"201fdb7150cb51e90bff9180aa7fdb820546d4058710e35775f407c1dae47638\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209\""
Mar 20 21:20:32.163823 containerd[1483]: time="2025-03-20T21:20:32.163796089Z" level=info msg="StartContainer for \"f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209\""
Mar 20 21:20:32.163933 containerd[1483]: time="2025-03-20T21:20:32.163902378Z" level=info msg="CreateContainer within sandbox \"23864640ac14d3a8ad1723686a0c67cfa67e2b69540510a26201acda4ea10164\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f\""
Mar 20 21:20:32.165896 containerd[1483]: time="2025-03-20T21:20:32.165855092Z" level=info msg="StartContainer for \"2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f\""
Mar 20 21:20:32.168934 containerd[1483]: time="2025-03-20T21:20:32.167090349Z" level=info msg="connecting to shim f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209" address="unix:///run/containerd/s/52e18e3a238fbb0bd97a22eaaf07133df66288fcf32f099606c9a0010848b62c" protocol=ttrpc version=3
Mar 20 21:20:32.168934 containerd[1483]: time="2025-03-20T21:20:32.168259362Z" level=info msg="connecting to shim 2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f" address="unix:///run/containerd/s/c1c8c05b44b7a6738cbb11132441d10267ca9113c3151ecfa43c88d5b3641df2" protocol=ttrpc version=3
Mar 20 21:20:32.179802 systemd[1]: Started cri-containerd-0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d.scope - libcontainer container 0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d.
Mar 20 21:20:32.199734 systemd[1]: Started cri-containerd-2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f.scope - libcontainer container 2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f.
Mar 20 21:20:32.201311 systemd[1]: Started cri-containerd-f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209.scope - libcontainer container f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209.
Mar 20 21:20:32.271302 kubelet[2220]: W0320 21:20:32.271160 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 20 21:20:32.271302 kubelet[2220]: E0320 21:20:32.271241 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:20:32.283728 containerd[1483]: time="2025-03-20T21:20:32.283580052Z" level=info msg="StartContainer for \"2d36ab69d86b95537011062e01573825c423f9b4c7c200edc6eee3467628bb7f\" returns successfully"
Mar 20 21:20:32.293888 containerd[1483]: time="2025-03-20T21:20:32.293829431Z" level=info msg="StartContainer for \"0da22958f459ef8f6925b24c58fd16105d3f1ab9f40c0481a33d42c5458b1a6d\" returns successfully"
Mar 20 21:20:32.294661 containerd[1483]: time="2025-03-20T21:20:32.294625805Z" level=info msg="StartContainer for \"f8c7a09dd10f20b4dfc0a7f1d0c96e5a6349e56c03129bcc9485fd62f2cb0209\" returns successfully"
Mar 20 21:20:32.307388 kubelet[2220]: E0320 21:20:32.307331 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s"
Mar 20 21:20:32.657129 kubelet[2220]: I0320 21:20:32.657065 2220 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 21:20:32.938044 kubelet[2220]: E0320 21:20:32.937923 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:32.939843 kubelet[2220]: E0320 21:20:32.939805 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:32.941718 kubelet[2220]: E0320 21:20:32.941701 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:33.738869 kubelet[2220]: I0320 21:20:33.738799 2220 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 20 21:20:33.738869 kubelet[2220]: E0320 21:20:33.738841 2220 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 20 21:20:33.750814 kubelet[2220]: E0320 21:20:33.750752 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:33.851491 kubelet[2220]: E0320 21:20:33.851432 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:33.944014 kubelet[2220]: E0320 21:20:33.943966 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:33.951990 kubelet[2220]: E0320 21:20:33.951958 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.052557 kubelet[2220]: E0320 21:20:34.052383 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.153259 kubelet[2220]: E0320 21:20:34.153191 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.253852 kubelet[2220]: E0320 21:20:34.253794 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.354846 kubelet[2220]: E0320 21:20:34.354707 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.455785 kubelet[2220]: E0320 21:20:34.455734 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.556474 kubelet[2220]: E0320 21:20:34.556407 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.657261 kubelet[2220]: E0320 21:20:34.657103 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.757753 kubelet[2220]: E0320 21:20:34.757693 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:34.858292 kubelet[2220]: E0320 21:20:34.858240 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:35.651035 systemd[1]: Reload requested from client PID 2498 ('systemctl') (unit session-7.scope)...
Mar 20 21:20:35.651053 systemd[1]: Reloading...
Mar 20 21:20:35.749639 zram_generator::config[2545]: No configuration found.
Mar 20 21:20:35.862095 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:20:35.894535 kubelet[2220]: I0320 21:20:35.894497 2220 apiserver.go:52] "Watching apiserver"
Mar 20 21:20:35.901949 kubelet[2220]: I0320 21:20:35.901869 2220 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 20 21:20:35.981350 systemd[1]: Reloading finished in 329 ms.
Mar 20 21:20:36.002253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:20:36.029327 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 21:20:36.029738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:20:36.029805 systemd[1]: kubelet.service: Consumed 1.184s CPU time, 117.5M memory peak.
Mar 20 21:20:36.032009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:20:36.221554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:20:36.231999 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 21:20:36.321681 kubelet[2587]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:20:36.321681 kubelet[2587]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 21:20:36.321681 kubelet[2587]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:20:36.322126 kubelet[2587]: I0320 21:20:36.321726 2587 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 21:20:36.328031 kubelet[2587]: I0320 21:20:36.327990 2587 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 20 21:20:36.328031 kubelet[2587]: I0320 21:20:36.328019 2587 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 21:20:36.328245 kubelet[2587]: I0320 21:20:36.328222 2587 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 20 21:20:36.329521 kubelet[2587]: I0320 21:20:36.329495 2587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 20 21:20:36.331418 kubelet[2587]: I0320 21:20:36.331377 2587 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 21:20:36.334974 kubelet[2587]: I0320 21:20:36.334927 2587 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 21:20:36.340931 kubelet[2587]: I0320 21:20:36.340898 2587 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 21:20:36.341084 kubelet[2587]: I0320 21:20:36.341064 2587 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 21:20:36.341288 kubelet[2587]: I0320 21:20:36.341256 2587 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 21:20:36.341508 kubelet[2587]: I0320 21:20:36.341286 2587 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 21:20:36.341508 kubelet[2587]: I0320 21:20:36.341509 2587 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 21:20:36.341722 kubelet[2587]: I0320 21:20:36.341521 2587 container_manager_linux.go:300] "Creating device plugin manager"
Mar 20 21:20:36.341722 kubelet[2587]: I0320 21:20:36.341563 2587 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 21:20:36.341722 kubelet[2587]: I0320 21:20:36.341720 2587 kubelet.go:408] "Attempting to sync node with API server"
Mar 20 21:20:36.341819 kubelet[2587]: I0320 21:20:36.341739 2587 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 21:20:36.341819 kubelet[2587]: I0320 21:20:36.341782 2587 kubelet.go:314] "Adding apiserver pod source"
Mar 20 21:20:36.341819 kubelet[2587]: I0320 21:20:36.341808 2587 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 21:20:36.343677 kubelet[2587]: I0320 21:20:36.342694 2587 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 20 21:20:36.343677 kubelet[2587]: I0320 21:20:36.343041 2587 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 21:20:36.343677 kubelet[2587]: I0320 21:20:36.343462 2587 server.go:1269] "Started kubelet"
Mar 20 21:20:36.345428 kubelet[2587]: I0320 21:20:36.344151 2587 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 21:20:36.345540 kubelet[2587]: I0320 21:20:36.345517 2587 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 20 21:20:36.347619 kubelet[2587]: I0320 21:20:36.346787 2587 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 21:20:36.348917 kubelet[2587]: I0320 21:20:36.345424 2587 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 21:20:36.348917 kubelet[2587]: I0320 21:20:36.348216 2587 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 20 21:20:36.348917 kubelet[2587]: I0320 21:20:36.345182 2587 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 21:20:36.350619 kubelet[2587]: I0320 21:20:36.350582 2587 server.go:460] "Adding debug handlers to kubelet server"
Mar 20 21:20:36.351486 kubelet[2587]: I0320 21:20:36.351466 2587 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 20 21:20:36.351730 kubelet[2587]: I0320 21:20:36.351714 2587 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 21:20:36.352098 kubelet[2587]: E0320 21:20:36.352076 2587 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:20:36.354514 kubelet[2587]: I0320 21:20:36.354478 2587 factory.go:221] Registration of the systemd container factory successfully
Mar 20 21:20:36.354644 kubelet[2587]: I0320 21:20:36.354617 2587 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 20 21:20:36.357036 kubelet[2587]: E0320 21:20:36.355539 2587 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 20 21:20:36.358034 kubelet[2587]: I0320 21:20:36.358002 2587 factory.go:221] Registration of the containerd container factory successfully
Mar 20 21:20:36.366360 kubelet[2587]: I0320 21:20:36.365367 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 20 21:20:36.367647 kubelet[2587]: I0320 21:20:36.367618 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 20 21:20:36.367764 kubelet[2587]: I0320 21:20:36.367749 2587 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 20 21:20:36.367856 kubelet[2587]: I0320 21:20:36.367834 2587 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 20 21:20:36.367930 kubelet[2587]: E0320 21:20:36.367901 2587 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 20 21:20:36.401454 kubelet[2587]: I0320 21:20:36.401404 2587 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 20 21:20:36.401454 kubelet[2587]: I0320 21:20:36.401432 2587 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.401474 2587 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.401729 2587 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.401743 2587 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.401764 2587 policy_none.go:49] "None policy: Start"
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.402423 2587 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 20 21:20:36.402507 kubelet[2587]: I0320 21:20:36.402455 2587 state_mem.go:35] "Initializing new in-memory state store"
Mar 20 21:20:36.402981 kubelet[2587]: I0320 21:20:36.402649 2587 state_mem.go:75] "Updated machine memory state"
Mar 20 21:20:36.408347 kubelet[2587]: I0320 21:20:36.408174 2587 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 21:20:36.408710 kubelet[2587]: I0320 21:20:36.408661 2587 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 21:20:36.408831 kubelet[2587]: I0320 21:20:36.408799 2587 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 21:20:36.409212 kubelet[2587]: I0320 21:20:36.409195 2587 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 21:20:36.514317 kubelet[2587]: I0320 21:20:36.514193 2587 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 21:20:36.553417 kubelet[2587]: I0320 21:20:36.553385 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 20 21:20:36.553417 kubelet[2587]: I0320 21:20:36.553417 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:20:36.553580 kubelet[2587]: I0320 21:20:36.553442 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:36.553580 kubelet[2587]: I0320 21:20:36.553458 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:36.553580 kubelet[2587]: I0320 21:20:36.553475 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:36.553580 kubelet[2587]: I0320 21:20:36.553489 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:36.553580 kubelet[2587]: I0320 21:20:36.553512 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:20:36.553723 kubelet[2587]: I0320 21:20:36.553562 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/996904c21121c141f58e5782ca614f29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"996904c21121c141f58e5782ca614f29\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:20:36.553723 kubelet[2587]: I0320 21:20:36.553618 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:20:36.609124 kubelet[2587]: I0320 21:20:36.609082 2587 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 20 21:20:36.609259 kubelet[2587]: I0320 21:20:36.609169 2587 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 20 21:20:36.775522 kubelet[2587]: E0320 21:20:36.775301 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:36.793735 kubelet[2587]: E0320 21:20:36.793569 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:36.793735 kubelet[2587]: E0320 21:20:36.793650 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:37.342778 kubelet[2587]: I0320 21:20:37.342717 2587 apiserver.go:52] "Watching apiserver"
Mar 20 21:20:37.352368 kubelet[2587]: I0320 21:20:37.352330 2587 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 20 21:20:37.380959 kubelet[2587]: E0320 21:20:37.380758 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:37.380959 kubelet[2587]: E0320 21:20:37.380774 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:37.380959 kubelet[2587]: E0320 21:20:37.380858 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:37.399931 kubelet[2587]: I0320 21:20:37.399867 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.399845842 podStartE2EDuration="1.399845842s" podCreationTimestamp="2025-03-20 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:20:37.399835723 +0000 UTC m=+1.160142942" watchObservedRunningTime="2025-03-20 21:20:37.399845842 +0000 UTC m=+1.160153051"
Mar 20 21:20:37.412314 kubelet[2587]: I0320 21:20:37.412246 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.412228362 podStartE2EDuration="1.412228362s" podCreationTimestamp="2025-03-20 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:20:37.407006736 +0000 UTC m=+1.167313945" watchObservedRunningTime="2025-03-20 21:20:37.412228362 +0000 UTC m=+1.172535571"
Mar 20 21:20:38.384064 kubelet[2587]: E0320 21:20:38.384024 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:39.970556 kubelet[2587]: E0320 21:20:39.970488 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:40.968148 kubelet[2587]: I0320 21:20:40.968079 2587 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 20 21:20:40.969226 containerd[1483]: time="2025-03-20T21:20:40.968973654Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 20 21:20:40.969648 kubelet[2587]: I0320 21:20:40.969352 2587 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 20 21:20:41.278763 sudo[1681]: pam_unix(sudo:session): session closed for user root
Mar 20 21:20:41.280257 sshd[1680]: Connection closed by 10.0.0.1 port 59978
Mar 20 21:20:41.280897 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Mar 20 21:20:41.286111 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:59978.service: Deactivated successfully.
Mar 20 21:20:41.289074 systemd[1]: session-7.scope: Deactivated successfully.
Mar 20 21:20:41.289337 systemd[1]: session-7.scope: Consumed 4.907s CPU time, 216.2M memory peak.
Mar 20 21:20:41.290932 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit.
Mar 20 21:20:41.292020 systemd-logind[1460]: Removed session 7.
Mar 20 21:20:41.849832 kubelet[2587]: I0320 21:20:41.849740 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.849686459 podStartE2EDuration="5.849686459s" podCreationTimestamp="2025-03-20 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:20:37.41234979 +0000 UTC m=+1.172656999" watchObservedRunningTime="2025-03-20 21:20:41.849686459 +0000 UTC m=+5.609993658"
Mar 20 21:20:41.860133 systemd[1]: Created slice kubepods-besteffort-podcb1bfaf2_5046_4873_ba4b_f10fa390b815.slice - libcontainer container kubepods-besteffort-podcb1bfaf2_5046_4873_ba4b_f10fa390b815.slice.
Mar 20 21:20:41.898027 kubelet[2587]: I0320 21:20:41.897963 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb1bfaf2-5046-4873-ba4b-f10fa390b815-xtables-lock\") pod \"kube-proxy-j2vcm\" (UID: \"cb1bfaf2-5046-4873-ba4b-f10fa390b815\") " pod="kube-system/kube-proxy-j2vcm"
Mar 20 21:20:41.898027 kubelet[2587]: I0320 21:20:41.898026 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6zm\" (UniqueName: \"kubernetes.io/projected/cb1bfaf2-5046-4873-ba4b-f10fa390b815-kube-api-access-wm6zm\") pod \"kube-proxy-j2vcm\" (UID: \"cb1bfaf2-5046-4873-ba4b-f10fa390b815\") " pod="kube-system/kube-proxy-j2vcm"
Mar 20 21:20:41.898229 kubelet[2587]: I0320 21:20:41.898090 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb1bfaf2-5046-4873-ba4b-f10fa390b815-kube-proxy\") pod \"kube-proxy-j2vcm\" (UID: \"cb1bfaf2-5046-4873-ba4b-f10fa390b815\") " pod="kube-system/kube-proxy-j2vcm"
Mar 20 21:20:41.898229 kubelet[2587]: I0320 21:20:41.898121 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb1bfaf2-5046-4873-ba4b-f10fa390b815-lib-modules\") pod \"kube-proxy-j2vcm\" (UID: \"cb1bfaf2-5046-4873-ba4b-f10fa390b815\") " pod="kube-system/kube-proxy-j2vcm"
Mar 20 21:20:42.120936 systemd[1]: Created slice kubepods-besteffort-pod229c7b51_01e9_4874_afc1_55363378436c.slice - libcontainer container kubepods-besteffort-pod229c7b51_01e9_4874_afc1_55363378436c.slice.
Mar 20 21:20:42.172811 kubelet[2587]: E0320 21:20:42.172740 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:42.174019 containerd[1483]: time="2025-03-20T21:20:42.173983357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2vcm,Uid:cb1bfaf2-5046-4873-ba4b-f10fa390b815,Namespace:kube-system,Attempt:0,}"
Mar 20 21:20:42.199476 kubelet[2587]: I0320 21:20:42.199421 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/229c7b51-01e9-4874-afc1-55363378436c-var-lib-calico\") pod \"tigera-operator-64ff5465b7-mn29r\" (UID: \"229c7b51-01e9-4874-afc1-55363378436c\") " pod="tigera-operator/tigera-operator-64ff5465b7-mn29r"
Mar 20 21:20:42.199476 kubelet[2587]: I0320 21:20:42.199461 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl67\" (UniqueName: \"kubernetes.io/projected/229c7b51-01e9-4874-afc1-55363378436c-kube-api-access-2wl67\") pod \"tigera-operator-64ff5465b7-mn29r\" (UID: \"229c7b51-01e9-4874-afc1-55363378436c\") " pod="tigera-operator/tigera-operator-64ff5465b7-mn29r"
Mar 20 21:20:42.217427 containerd[1483]: time="2025-03-20T21:20:42.217359658Z" level=info msg="connecting to shim 98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99" address="unix:///run/containerd/s/282c900edc3d3178ebc217e945968a7424b06f1b9b820db21760b1636e6e1576" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:20:42.265785 systemd[1]: Started cri-containerd-98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99.scope - libcontainer container 98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99.
Mar 20 21:20:42.292094 containerd[1483]: time="2025-03-20T21:20:42.292036223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2vcm,Uid:cb1bfaf2-5046-4873-ba4b-f10fa390b815,Namespace:kube-system,Attempt:0,} returns sandbox id \"98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99\""
Mar 20 21:20:42.293058 kubelet[2587]: E0320 21:20:42.293005 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:42.295400 containerd[1483]: time="2025-03-20T21:20:42.295354459Z" level=info msg="CreateContainer within sandbox \"98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 20 21:20:42.309243 containerd[1483]: time="2025-03-20T21:20:42.309178174Z" level=info msg="Container c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:20:42.318376 containerd[1483]: time="2025-03-20T21:20:42.318322213Z" level=info msg="CreateContainer within sandbox \"98eaaa3da9ef2ff982bf62a179ae4f173cbf2ebb10975ebc0a272eb8689ffe99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367\""
Mar 20 21:20:42.318961 containerd[1483]: time="2025-03-20T21:20:42.318927428Z" level=info msg="StartContainer for \"c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367\""
Mar 20 21:20:42.320642 containerd[1483]: time="2025-03-20T21:20:42.320610702Z" level=info msg="connecting to shim c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367" address="unix:///run/containerd/s/282c900edc3d3178ebc217e945968a7424b06f1b9b820db21760b1636e6e1576" protocol=ttrpc version=3
Mar 20 21:20:42.349823 systemd[1]: Started cri-containerd-c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367.scope - libcontainer container c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367.
Mar 20 21:20:42.399369 containerd[1483]: time="2025-03-20T21:20:42.399082183Z" level=info msg="StartContainer for \"c433bfa3d0c48a9e3db2eabee911bf76e2f4192d57c92ec9663a84812f797367\" returns successfully"
Mar 20 21:20:42.424877 containerd[1483]: time="2025-03-20T21:20:42.424818082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-mn29r,Uid:229c7b51-01e9-4874-afc1-55363378436c,Namespace:tigera-operator,Attempt:0,}"
Mar 20 21:20:42.456658 containerd[1483]: time="2025-03-20T21:20:42.456559669Z" level=info msg="connecting to shim d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28" address="unix:///run/containerd/s/ea495d8b8bd873f6a0f46568f4972dd7cee7cda918886b030be71dbb218b63c7" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:20:42.487791 systemd[1]: Started cri-containerd-d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28.scope - libcontainer container d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28.
Mar 20 21:20:42.540306 containerd[1483]: time="2025-03-20T21:20:42.540263911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-mn29r,Uid:229c7b51-01e9-4874-afc1-55363378436c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28\""
Mar 20 21:20:42.542575 containerd[1483]: time="2025-03-20T21:20:42.542529166Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 20 21:20:43.055316 kubelet[2587]: E0320 21:20:43.055200 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:43.398192 kubelet[2587]: E0320 21:20:43.397985 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:43.398192 kubelet[2587]: E0320 21:20:43.398121 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:43.414438 kubelet[2587]: I0320 21:20:43.414352 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2vcm" podStartSLOduration=2.414328196 podStartE2EDuration="2.414328196s" podCreationTimestamp="2025-03-20 21:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:20:43.414097647 +0000 UTC m=+7.174404856" watchObservedRunningTime="2025-03-20 21:20:43.414328196 +0000 UTC m=+7.174635416"
Mar 20 21:20:43.993195 kubelet[2587]: E0320 21:20:43.993143 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:44.404053 kubelet[2587]: E0320 21:20:44.403875 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:44.407514 kubelet[2587]: E0320 21:20:44.404880 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:44.602827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709689963.mount: Deactivated successfully.
Mar 20 21:20:44.923652 containerd[1483]: time="2025-03-20T21:20:44.923566927Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:20:44.924467 containerd[1483]: time="2025-03-20T21:20:44.924374295Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 20 21:20:44.925484 containerd[1483]: time="2025-03-20T21:20:44.925437441Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:20:44.927416 containerd[1483]: time="2025-03-20T21:20:44.927371055Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:20:44.928049 containerd[1483]: time="2025-03-20T21:20:44.928007287Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.385437333s"
Mar 20 21:20:44.928049 containerd[1483]: time="2025-03-20T21:20:44.928040080Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 20 21:20:44.930504 containerd[1483]: time="2025-03-20T21:20:44.930454440Z" level=info msg="CreateContainer within sandbox \"d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 20 21:20:44.939391 containerd[1483]: time="2025-03-20T21:20:44.939337364Z" level=info msg="Container fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:20:44.943143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229731930.mount: Deactivated successfully.
Mar 20 21:20:44.946183 containerd[1483]: time="2025-03-20T21:20:44.946132402Z" level=info msg="CreateContainer within sandbox \"d64a2efa9dabfc2d4e5a002445d4b8c2f3c80b2c90a4f71838c435ac964d9f28\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289\""
Mar 20 21:20:44.946843 containerd[1483]: time="2025-03-20T21:20:44.946776620Z" level=info msg="StartContainer for \"fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289\""
Mar 20 21:20:44.947741 containerd[1483]: time="2025-03-20T21:20:44.947711931Z" level=info msg="connecting to shim fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289" address="unix:///run/containerd/s/ea495d8b8bd873f6a0f46568f4972dd7cee7cda918886b030be71dbb218b63c7" protocol=ttrpc version=3
Mar 20 21:20:44.972835 systemd[1]: Started cri-containerd-fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289.scope - libcontainer container fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289.
Mar 20 21:20:45.008756 containerd[1483]: time="2025-03-20T21:20:45.008703267Z" level=info msg="StartContainer for \"fc0b5d50d3c8f23a5f69cd9c8994765ff44a577fdd13df35448854e7817a6289\" returns successfully"
Mar 20 21:20:45.407827 kubelet[2587]: E0320 21:20:45.407660 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:20:45.418086 kubelet[2587]: I0320 21:20:45.417817 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-mn29r" podStartSLOduration=1.030961049 podStartE2EDuration="3.417791028s" podCreationTimestamp="2025-03-20 21:20:42 +0000 UTC" firstStartedPulling="2025-03-20 21:20:42.542095808 +0000 UTC m=+6.302403017" lastFinishedPulling="2025-03-20 21:20:44.928925787 +0000 UTC m=+8.689232996" observedRunningTime="2025-03-20 21:20:45.417674937 +0000 UTC m=+9.177982166" watchObservedRunningTime="2025-03-20 21:20:45.417791028 +0000 UTC m=+9.178098237"
Mar 20 21:20:48.008052 systemd[1]: Created slice kubepods-besteffort-pod939943d1_3454_4450_b614_3a5c7f020c91.slice - libcontainer container kubepods-besteffort-pod939943d1_3454_4450_b614_3a5c7f020c91.slice.
Mar 20 21:20:48.028047 systemd[1]: Created slice kubepods-besteffort-pod05eb6a36_c5ad_4965_ac6b_8902da8d887b.slice - libcontainer container kubepods-besteffort-pod05eb6a36_c5ad_4965_ac6b_8902da8d887b.slice.
Mar 20 21:20:48.037501 kubelet[2587]: I0320 21:20:48.037385 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-cni-net-dir\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038158 kubelet[2587]: I0320 21:20:48.037524 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939943d1-3454-4450-b614-3a5c7f020c91-tigera-ca-bundle\") pod \"calico-typha-66c5496894-hwcwh\" (UID: \"939943d1-3454-4450-b614-3a5c7f020c91\") " pod="calico-system/calico-typha-66c5496894-hwcwh"
Mar 20 21:20:48.038158 kubelet[2587]: I0320 21:20:48.037658 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-policysync\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038158 kubelet[2587]: I0320 21:20:48.037735 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-var-run-calico\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038158 kubelet[2587]: I0320 21:20:48.037760 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-flexvol-driver-host\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038158 kubelet[2587]: I0320 21:20:48.037822 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/939943d1-3454-4450-b614-3a5c7f020c91-typha-certs\") pod \"calico-typha-66c5496894-hwcwh\" (UID: \"939943d1-3454-4450-b614-3a5c7f020c91\") " pod="calico-system/calico-typha-66c5496894-hwcwh"
Mar 20 21:20:48.038348 kubelet[2587]: I0320 21:20:48.037845 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-lib-modules\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038348 kubelet[2587]: I0320 21:20:48.037880 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-cni-bin-dir\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038348 kubelet[2587]: I0320 21:20:48.037903 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzsm\" (UniqueName: \"kubernetes.io/projected/939943d1-3454-4450-b614-3a5c7f020c91-kube-api-access-xgzsm\") pod \"calico-typha-66c5496894-hwcwh\" (UID: \"939943d1-3454-4450-b614-3a5c7f020c91\") " pod="calico-system/calico-typha-66c5496894-hwcwh"
Mar 20 21:20:48.038348 kubelet[2587]: I0320 21:20:48.037924 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05eb6a36-c5ad-4965-ac6b-8902da8d887b-node-certs\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038348 kubelet[2587]: I0320 21:20:48.037946 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-var-lib-calico\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038531 kubelet[2587]: I0320 21:20:48.037972 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-xtables-lock\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038531 kubelet[2587]: I0320 21:20:48.037994 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05eb6a36-c5ad-4965-ac6b-8902da8d887b-tigera-ca-bundle\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038531 kubelet[2587]: I0320 21:20:48.038017 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05eb6a36-c5ad-4965-ac6b-8902da8d887b-cni-log-dir\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.038531 kubelet[2587]: I0320 21:20:48.038036 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqsx\" (UniqueName: \"kubernetes.io/projected/05eb6a36-c5ad-4965-ac6b-8902da8d887b-kube-api-access-hbqsx\") pod \"calico-node-hsg2l\" (UID: \"05eb6a36-c5ad-4965-ac6b-8902da8d887b\") " pod="calico-system/calico-node-hsg2l"
Mar 20 21:20:48.083228 kubelet[2587]: E0320 21:20:48.082741 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084"
Mar 20 21:20:48.138818 kubelet[2587]: I0320 21:20:48.138747 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56991647-fb34-46d6-857a-df4e1a226084-varrun\") pod \"csi-node-driver-rrs62\" (UID: \"56991647-fb34-46d6-857a-df4e1a226084\") " pod="calico-system/csi-node-driver-rrs62"
Mar 20 21:20:48.139020 kubelet[2587]: I0320 21:20:48.138824 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xpk\" (UniqueName: \"kubernetes.io/projected/56991647-fb34-46d6-857a-df4e1a226084-kube-api-access-44xpk\") pod \"csi-node-driver-rrs62\" (UID: \"56991647-fb34-46d6-857a-df4e1a226084\") " pod="calico-system/csi-node-driver-rrs62"
Mar 20 21:20:48.139020 kubelet[2587]: I0320 21:20:48.138872 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56991647-fb34-46d6-857a-df4e1a226084-socket-dir\") pod \"csi-node-driver-rrs62\" (UID: \"56991647-fb34-46d6-857a-df4e1a226084\") " pod="calico-system/csi-node-driver-rrs62"
Mar 20 21:20:48.139020 kubelet[2587]: I0320 21:20:48.138941 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56991647-fb34-46d6-857a-df4e1a226084-registration-dir\") pod \"csi-node-driver-rrs62\" (UID: \"56991647-fb34-46d6-857a-df4e1a226084\") " pod="calico-system/csi-node-driver-rrs62"
Mar 20 21:20:48.139020 kubelet[2587]: I0320 21:20:48.138975 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56991647-fb34-46d6-857a-df4e1a226084-kubelet-dir\") pod \"csi-node-driver-rrs62\" (UID: \"56991647-fb34-46d6-857a-df4e1a226084\") " pod="calico-system/csi-node-driver-rrs62"
Mar 20 21:20:48.149320 kubelet[2587]: E0320 21:20:48.149068 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.149320 kubelet[2587]: W0320 21:20:48.149095 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.149320 kubelet[2587]: E0320 21:20:48.149125 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.151198 kubelet[2587]: E0320 21:20:48.150992 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.151198 kubelet[2587]: W0320 21:20:48.151136 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.151198 kubelet[2587]: E0320 21:20:48.151163 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.154890 kubelet[2587]: E0320 21:20:48.154459 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.154890 kubelet[2587]: W0320 21:20:48.154488 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.154890 kubelet[2587]: E0320 21:20:48.154547 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.156713 kubelet[2587]: E0320 21:20:48.156666 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.156713 kubelet[2587]: W0320 21:20:48.156713 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.156821 kubelet[2587]: E0320 21:20:48.156729 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.239584 kubelet[2587]: E0320 21:20:48.239540 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.239584 kubelet[2587]: W0320 21:20:48.239566 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.239584 kubelet[2587]: E0320 21:20:48.239590 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.239937 kubelet[2587]: E0320 21:20:48.239917 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.239937 kubelet[2587]: W0320 21:20:48.239934 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.240013 kubelet[2587]: E0320 21:20:48.239955 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.240229 kubelet[2587]: E0320 21:20:48.240208 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.240229 kubelet[2587]: W0320 21:20:48.240224 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.240291 kubelet[2587]: E0320 21:20:48.240242 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.240668 kubelet[2587]: E0320 21:20:48.240646 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.240668 kubelet[2587]: W0320 21:20:48.240663 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.240874 kubelet[2587]: E0320 21:20:48.240683 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.240977 kubelet[2587]: E0320 21:20:48.240950 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.240977 kubelet[2587]: W0320 21:20:48.240963 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.241154 kubelet[2587]: E0320 21:20:48.241042 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.241154 kubelet[2587]: E0320 21:20:48.241142 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.241154 kubelet[2587]: W0320 21:20:48.241153 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.241223 kubelet[2587]: E0320 21:20:48.241170 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.241412 kubelet[2587]: E0320 21:20:48.241396 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.241412 kubelet[2587]: W0320 21:20:48.241408 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.241480 kubelet[2587]: E0320 21:20:48.241447 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.241761 kubelet[2587]: E0320 21:20:48.241724 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.241761 kubelet[2587]: W0320 21:20:48.241739 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.241919 kubelet[2587]: E0320 21:20:48.241836 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.241970 kubelet[2587]: E0320 21:20:48.241961 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.241996 kubelet[2587]: W0320 21:20:48.241974 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.242018 kubelet[2587]: E0320 21:20:48.241995 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.242264 kubelet[2587]: E0320 21:20:48.242248 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.242304 kubelet[2587]: W0320 21:20:48.242264 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.242304 kubelet[2587]: E0320 21:20:48.242286 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.242530 kubelet[2587]: E0320 21:20:48.242513 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.242530 kubelet[2587]: W0320 21:20:48.242528 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.242592 kubelet[2587]: E0320 21:20:48.242563 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.242772 kubelet[2587]: E0320 21:20:48.242757 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.242772 kubelet[2587]: W0320 21:20:48.242772 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.242908 kubelet[2587]: E0320 21:20:48.242890 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.243014 kubelet[2587]: E0320 21:20:48.243001 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.243048 kubelet[2587]: W0320 21:20:48.243014 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.243083 kubelet[2587]: E0320 21:20:48.243053 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 20 21:20:48.243317 kubelet[2587]: E0320 21:20:48.243284 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 20 21:20:48.243317 kubelet[2587]: W0320 21:20:48.243298 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 20 21:20:48.243431 kubelet[2587]: E0320 21:20:48.243349 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 20 21:20:48.243570 kubelet[2587]: E0320 21:20:48.243554 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.243621 kubelet[2587]: W0320 21:20:48.243569 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.243644 kubelet[2587]: E0320 21:20:48.243628 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.244231 kubelet[2587]: E0320 21:20:48.244213 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.244279 kubelet[2587]: W0320 21:20:48.244230 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.244279 kubelet[2587]: E0320 21:20:48.244274 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.244579 kubelet[2587]: E0320 21:20:48.244558 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.244579 kubelet[2587]: W0320 21:20:48.244575 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.244681 kubelet[2587]: E0320 21:20:48.244628 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.244902 kubelet[2587]: E0320 21:20:48.244873 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.244902 kubelet[2587]: W0320 21:20:48.244887 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.244983 kubelet[2587]: E0320 21:20:48.244925 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.245147 kubelet[2587]: E0320 21:20:48.245115 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.245147 kubelet[2587]: W0320 21:20:48.245130 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.245217 kubelet[2587]: E0320 21:20:48.245165 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.245377 kubelet[2587]: E0320 21:20:48.245354 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.245454 kubelet[2587]: W0320 21:20:48.245375 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.245454 kubelet[2587]: E0320 21:20:48.245409 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.246222 kubelet[2587]: E0320 21:20:48.246136 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.246222 kubelet[2587]: W0320 21:20:48.246154 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.246222 kubelet[2587]: E0320 21:20:48.246184 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.246649 kubelet[2587]: E0320 21:20:48.246629 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.246649 kubelet[2587]: W0320 21:20:48.246647 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.246956 kubelet[2587]: E0320 21:20:48.246667 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.246956 kubelet[2587]: E0320 21:20:48.246931 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.246956 kubelet[2587]: W0320 21:20:48.246940 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.246956 kubelet[2587]: E0320 21:20:48.246955 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.247243 kubelet[2587]: E0320 21:20:48.247219 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.247243 kubelet[2587]: W0320 21:20:48.247234 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.247243 kubelet[2587]: E0320 21:20:48.247249 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.247582 kubelet[2587]: E0320 21:20:48.247563 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.247582 kubelet[2587]: W0320 21:20:48.247576 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.247737 kubelet[2587]: E0320 21:20:48.247588 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:48.256198 kubelet[2587]: E0320 21:20:48.256154 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:48.256198 kubelet[2587]: W0320 21:20:48.256175 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:48.256198 kubelet[2587]: E0320 21:20:48.256200 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:48.320370 kubelet[2587]: E0320 21:20:48.320202 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:48.321198 containerd[1483]: time="2025-03-20T21:20:48.321069393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c5496894-hwcwh,Uid:939943d1-3454-4450-b614-3a5c7f020c91,Namespace:calico-system,Attempt:0,}" Mar 20 21:20:48.333193 kubelet[2587]: E0320 21:20:48.333146 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:48.333707 containerd[1483]: time="2025-03-20T21:20:48.333659457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hsg2l,Uid:05eb6a36-c5ad-4965-ac6b-8902da8d887b,Namespace:calico-system,Attempt:0,}" Mar 20 21:20:48.430391 containerd[1483]: time="2025-03-20T21:20:48.430331597Z" level=info msg="connecting to shim 071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041" address="unix:///run/containerd/s/dc94227da14924bf59cf5f0d559f841268ee10026286dc9c0afcbef4a945c221" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:20:48.434683 containerd[1483]: time="2025-03-20T21:20:48.434620670Z" level=info msg="connecting to shim 4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109" address="unix:///run/containerd/s/05cb1025642afeb2f882416624fd3de7d3655c28c3c98b40f6568c5fcd48e863" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:20:48.465743 systemd[1]: Started cri-containerd-071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041.scope - libcontainer container 071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041. 
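The repeated `driver-call.go` failures above occur because kubelet probes the FlexVolume plugin directory, invokes the driver binary at `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the `init` argument, finds the executable missing ("executable file not found in $PATH"), and then fails to unmarshal the resulting empty stdout as JSON. As a hedged sketch (a hypothetical stand-in, not Flatcar's or the nodeagent's actual driver), a conforming FlexVolume driver's `init` call prints a JSON status object, and an empty stdout reproduces exactly the unmarshal error seen in the log:

```python
import json

def flexvolume_init_response():
    # Hypothetical driver "init" handler: kubelet parses driver stdout
    # as JSON, so a present, conforming driver would emit something like this.
    return json.dumps({"status": "Success", "capabilities": {"attach": False}})

def parse_driver_output(stdout):
    # Mimics kubelet's unmarshal step: the missing binary above yields
    # empty output, hence "unexpected end of JSON input".
    if not stdout:
        raise ValueError("unexpected end of JSON input")
    return json.loads(stdout)
```

Because the probe runs on every plugin-directory scan, the same three-line error triplet (unmarshal failure, driver-call warning, plugin-probe error) repeats for as long as the binary is absent.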
Mar 20 21:20:48.469890 systemd[1]: Started cri-containerd-4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109.scope - libcontainer container 4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109. Mar 20 21:20:48.506426 containerd[1483]: time="2025-03-20T21:20:48.506376792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hsg2l,Uid:05eb6a36-c5ad-4965-ac6b-8902da8d887b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\"" Mar 20 21:20:48.507624 kubelet[2587]: E0320 21:20:48.507385 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:48.508212 containerd[1483]: time="2025-03-20T21:20:48.508185887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 20 21:20:48.518163 containerd[1483]: time="2025-03-20T21:20:48.518114939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c5496894-hwcwh,Uid:939943d1-3454-4450-b614-3a5c7f020c91,Namespace:calico-system,Attempt:0,} returns sandbox id \"071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041\"" Mar 20 21:20:48.518787 kubelet[2587]: E0320 21:20:48.518764 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:49.976542 kubelet[2587]: E0320 21:20:49.976462 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:50.033182 kubelet[2587]: E0320 21:20:50.033122 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.033182 kubelet[2587]: W0320 21:20:50.033159 2587 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.033182 kubelet[2587]: E0320 21:20:50.033190 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.033890 kubelet[2587]: E0320 21:20:50.033789 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.033890 kubelet[2587]: W0320 21:20:50.033805 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.033890 kubelet[2587]: E0320 21:20:50.033829 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.034213 kubelet[2587]: E0320 21:20:50.034194 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.034213 kubelet[2587]: W0320 21:20:50.034208 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.034295 kubelet[2587]: E0320 21:20:50.034221 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.036208 kubelet[2587]: E0320 21:20:50.036175 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.036208 kubelet[2587]: W0320 21:20:50.036192 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.036208 kubelet[2587]: E0320 21:20:50.036205 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.036667 kubelet[2587]: E0320 21:20:50.036579 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.036667 kubelet[2587]: W0320 21:20:50.036656 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.036783 kubelet[2587]: E0320 21:20:50.036694 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.037043 kubelet[2587]: E0320 21:20:50.037023 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.037043 kubelet[2587]: W0320 21:20:50.037039 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.037127 kubelet[2587]: E0320 21:20:50.037053 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.037371 kubelet[2587]: E0320 21:20:50.037353 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.037458 kubelet[2587]: W0320 21:20:50.037435 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.037526 kubelet[2587]: E0320 21:20:50.037459 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.037781 kubelet[2587]: E0320 21:20:50.037765 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.037781 kubelet[2587]: W0320 21:20:50.037780 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.037877 kubelet[2587]: E0320 21:20:50.037791 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.038179 kubelet[2587]: E0320 21:20:50.038163 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.038179 kubelet[2587]: W0320 21:20:50.038177 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.038279 kubelet[2587]: E0320 21:20:50.038188 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.038514 kubelet[2587]: E0320 21:20:50.038489 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.038514 kubelet[2587]: W0320 21:20:50.038512 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.038625 kubelet[2587]: E0320 21:20:50.038524 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.039003 kubelet[2587]: E0320 21:20:50.038947 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.039003 kubelet[2587]: W0320 21:20:50.038961 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.039003 kubelet[2587]: E0320 21:20:50.038973 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.039443 kubelet[2587]: E0320 21:20:50.039301 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.039443 kubelet[2587]: W0320 21:20:50.039318 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.039443 kubelet[2587]: E0320 21:20:50.039330 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.039766 kubelet[2587]: E0320 21:20:50.039749 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.039954 kubelet[2587]: W0320 21:20:50.039834 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.039954 kubelet[2587]: E0320 21:20:50.039851 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.040118 kubelet[2587]: E0320 21:20:50.040100 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.040118 kubelet[2587]: W0320 21:20:50.040114 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.040191 kubelet[2587]: E0320 21:20:50.040160 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.040449 kubelet[2587]: E0320 21:20:50.040432 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.040449 kubelet[2587]: W0320 21:20:50.040445 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.040541 kubelet[2587]: E0320 21:20:50.040457 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.040803 kubelet[2587]: E0320 21:20:50.040782 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.040803 kubelet[2587]: W0320 21:20:50.040794 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.040893 kubelet[2587]: E0320 21:20:50.040805 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.041260 kubelet[2587]: E0320 21:20:50.041106 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.041260 kubelet[2587]: W0320 21:20:50.041120 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.041260 kubelet[2587]: E0320 21:20:50.041132 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.041494 kubelet[2587]: E0320 21:20:50.041457 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.041736 kubelet[2587]: W0320 21:20:50.041634 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.041736 kubelet[2587]: E0320 21:20:50.041654 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.042108 kubelet[2587]: E0320 21:20:50.042042 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.042108 kubelet[2587]: W0320 21:20:50.042055 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.042108 kubelet[2587]: E0320 21:20:50.042068 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.042749 kubelet[2587]: E0320 21:20:50.042432 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.042749 kubelet[2587]: W0320 21:20:50.042447 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.042749 kubelet[2587]: E0320 21:20:50.042459 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.043304 kubelet[2587]: E0320 21:20:50.043147 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.043304 kubelet[2587]: W0320 21:20:50.043162 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.043304 kubelet[2587]: E0320 21:20:50.043175 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.043747 kubelet[2587]: E0320 21:20:50.043569 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.043747 kubelet[2587]: W0320 21:20:50.043626 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.043747 kubelet[2587]: E0320 21:20:50.043641 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.043924 kubelet[2587]: E0320 21:20:50.043909 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.043995 kubelet[2587]: W0320 21:20:50.043981 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.044058 kubelet[2587]: E0320 21:20:50.044046 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.044857 kubelet[2587]: E0320 21:20:50.044838 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.045665 kubelet[2587]: W0320 21:20:50.045481 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.045665 kubelet[2587]: E0320 21:20:50.045515 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:20:50.046017 kubelet[2587]: E0320 21:20:50.046002 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:20:50.046172 kubelet[2587]: W0320 21:20:50.046083 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:20:50.046172 kubelet[2587]: E0320 21:20:50.046097 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:20:50.077284 containerd[1483]: time="2025-03-20T21:20:50.077195515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:50.078158 containerd[1483]: time="2025-03-20T21:20:50.078083740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 20 21:20:50.079429 containerd[1483]: time="2025-03-20T21:20:50.079399624Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:50.082040 containerd[1483]: time="2025-03-20T21:20:50.081968394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:50.082470 containerd[1483]: time="2025-03-20T21:20:50.082433215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.574121421s" Mar 20 21:20:50.082470 containerd[1483]: time="2025-03-20T21:20:50.082467290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 20 21:20:50.083695 containerd[1483]: time="2025-03-20T21:20:50.083663058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 20 21:20:50.084979 containerd[1483]: time="2025-03-20T21:20:50.084852383Z" 
level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 20 21:20:50.097644 containerd[1483]: time="2025-03-20T21:20:50.097403749Z" level=info msg="Container 62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:20:50.109316 containerd[1483]: time="2025-03-20T21:20:50.109245530Z" level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\"" Mar 20 21:20:50.109906 containerd[1483]: time="2025-03-20T21:20:50.109869414Z" level=info msg="StartContainer for \"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\"" Mar 20 21:20:50.111895 containerd[1483]: time="2025-03-20T21:20:50.111857883Z" level=info msg="connecting to shim 62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189" address="unix:///run/containerd/s/05cb1025642afeb2f882416624fd3de7d3655c28c3c98b40f6568c5fcd48e863" protocol=ttrpc version=3 Mar 20 21:20:50.143759 systemd[1]: Started cri-containerd-62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189.scope - libcontainer container 62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189. Mar 20 21:20:50.207733 systemd[1]: cri-containerd-62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189.scope: Deactivated successfully. Mar 20 21:20:50.208218 systemd[1]: cri-containerd-62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189.scope: Consumed 46ms CPU time, 8.3M memory peak, 4.1M written to disk. 
Mar 20 21:20:50.210251 containerd[1483]: time="2025-03-20T21:20:50.210207859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\" id:\"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\" pid:3160 exited_at:{seconds:1742505650 nanos:209643820}" Mar 20 21:20:50.212090 containerd[1483]: time="2025-03-20T21:20:50.211752618Z" level=info msg="received exit event container_id:\"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\" id:\"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\" pid:3160 exited_at:{seconds:1742505650 nanos:209643820}" Mar 20 21:20:50.214275 containerd[1483]: time="2025-03-20T21:20:50.214176022Z" level=info msg="StartContainer for \"62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189\" returns successfully" Mar 20 21:20:50.239120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62d83ce00f190baca03462b5f93bfba346394cd3abc1930ad935e4f8dbb42189-rootfs.mount: Deactivated successfully. Mar 20 21:20:50.369234 kubelet[2587]: E0320 21:20:50.369140 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:20:50.419959 kubelet[2587]: E0320 21:20:50.419917 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:51.341910 update_engine[1465]: I20250320 21:20:51.341785 1465 update_attempter.cc:509] Updating boot flags... 
Mar 20 21:20:51.375671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3206) Mar 20 21:20:51.426007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3205) Mar 20 21:20:51.465801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3205) Mar 20 21:20:52.370635 kubelet[2587]: E0320 21:20:52.370556 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:20:53.254853 containerd[1483]: time="2025-03-20T21:20:53.254789154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:53.255643 containerd[1483]: time="2025-03-20T21:20:53.255567998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 20 21:20:53.257245 containerd[1483]: time="2025-03-20T21:20:53.257174257Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:53.263020 containerd[1483]: time="2025-03-20T21:20:53.262948455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:53.263888 containerd[1483]: time="2025-03-20T21:20:53.263835984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.180136818s" Mar 20 21:20:53.263888 containerd[1483]: time="2025-03-20T21:20:53.263875890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 20 21:20:53.265405 containerd[1483]: time="2025-03-20T21:20:53.265155632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 20 21:20:53.276076 containerd[1483]: time="2025-03-20T21:20:53.276014652Z" level=info msg="CreateContainer within sandbox \"071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 20 21:20:53.287467 containerd[1483]: time="2025-03-20T21:20:53.287373297Z" level=info msg="Container 3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:20:53.297385 containerd[1483]: time="2025-03-20T21:20:53.297313027Z" level=info msg="CreateContainer within sandbox \"071e278a016c9b45480917b1a796f5ce074924c244bff2929996ef546048d041\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511\"" Mar 20 21:20:53.297953 containerd[1483]: time="2025-03-20T21:20:53.297923863Z" level=info msg="StartContainer for \"3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511\"" Mar 20 21:20:53.299245 containerd[1483]: time="2025-03-20T21:20:53.299219876Z" level=info msg="connecting to shim 3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511" address="unix:///run/containerd/s/dc94227da14924bf59cf5f0d559f841268ee10026286dc9c0afcbef4a945c221" protocol=ttrpc version=3 Mar 20 21:20:53.320906 systemd[1]: Started 
cri-containerd-3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511.scope - libcontainer container 3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511. Mar 20 21:20:53.379873 containerd[1483]: time="2025-03-20T21:20:53.379810028Z" level=info msg="StartContainer for \"3a02744e1cc68c0245870ba893eccb4bc977a1020ef80661df268f6394396511\" returns successfully" Mar 20 21:20:53.428572 kubelet[2587]: E0320 21:20:53.428529 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:53.442815 kubelet[2587]: I0320 21:20:53.442726 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c5496894-hwcwh" podStartSLOduration=1.697072513 podStartE2EDuration="6.442704122s" podCreationTimestamp="2025-03-20 21:20:47 +0000 UTC" firstStartedPulling="2025-03-20 21:20:48.519322942 +0000 UTC m=+12.279630161" lastFinishedPulling="2025-03-20 21:20:53.264954541 +0000 UTC m=+17.025261770" observedRunningTime="2025-03-20 21:20:53.44237586 +0000 UTC m=+17.202683069" watchObservedRunningTime="2025-03-20 21:20:53.442704122 +0000 UTC m=+17.203011341" Mar 20 21:20:54.369911 kubelet[2587]: E0320 21:20:54.369850 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:20:54.430142 kubelet[2587]: I0320 21:20:54.430092 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:20:54.430749 kubelet[2587]: E0320 21:20:54.430553 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 
20 21:20:56.368979 kubelet[2587]: E0320 21:20:56.368887 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:20:57.996772 containerd[1483]: time="2025-03-20T21:20:57.996697614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:57.998207 containerd[1483]: time="2025-03-20T21:20:57.998147482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 20 21:20:57.999502 containerd[1483]: time="2025-03-20T21:20:57.999460993Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:58.002176 containerd[1483]: time="2025-03-20T21:20:58.002126295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:20:58.002695 containerd[1483]: time="2025-03-20T21:20:58.002655705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.737459818s" Mar 20 21:20:58.002695 containerd[1483]: time="2025-03-20T21:20:58.002693366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference 
\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 20 21:20:58.022836 containerd[1483]: time="2025-03-20T21:20:58.022767675Z" level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 20 21:20:58.034072 containerd[1483]: time="2025-03-20T21:20:58.033170004Z" level=info msg="Container 9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:20:58.044799 containerd[1483]: time="2025-03-20T21:20:58.044734247Z" level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\"" Mar 20 21:20:58.045428 containerd[1483]: time="2025-03-20T21:20:58.045389022Z" level=info msg="StartContainer for \"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\"" Mar 20 21:20:58.046957 containerd[1483]: time="2025-03-20T21:20:58.046919683Z" level=info msg="connecting to shim 9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a" address="unix:///run/containerd/s/05cb1025642afeb2f882416624fd3de7d3655c28c3c98b40f6568c5fcd48e863" protocol=ttrpc version=3 Mar 20 21:20:58.073961 systemd[1]: Started cri-containerd-9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a.scope - libcontainer container 9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a. 
Mar 20 21:20:58.132339 containerd[1483]: time="2025-03-20T21:20:58.132273784Z" level=info msg="StartContainer for \"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\" returns successfully" Mar 20 21:20:58.499113 kubelet[2587]: E0320 21:20:58.499056 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:20:58.500658 kubelet[2587]: E0320 21:20:58.499760 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:59.279367 systemd[1]: cri-containerd-9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a.scope: Deactivated successfully. Mar 20 21:20:59.279863 systemd[1]: cri-containerd-9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a.scope: Consumed 596ms CPU time, 163.3M memory peak, 4K read from disk, 154M written to disk. 
Mar 20 21:20:59.280608 containerd[1483]: time="2025-03-20T21:20:59.280555188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\" id:\"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\" pid:3274 exited_at:{seconds:1742505659 nanos:279870936}" Mar 20 21:20:59.280991 containerd[1483]: time="2025-03-20T21:20:59.280663983Z" level=info msg="received exit event container_id:\"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\" id:\"9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a\" pid:3274 exited_at:{seconds:1742505659 nanos:279870936}" Mar 20 21:20:59.296131 kubelet[2587]: I0320 21:20:59.296090 2587 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 20 21:20:59.309144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ed5ed289d73d27453ae59fd29d4301acefc70e7e07c460afb378451ee47133a-rootfs.mount: Deactivated successfully. Mar 20 21:20:59.341265 systemd[1]: Created slice kubepods-besteffort-podcbd026c9_21f3_43aa_9908_001b277aa4ec.slice - libcontainer container kubepods-besteffort-podcbd026c9_21f3_43aa_9908_001b277aa4ec.slice. Mar 20 21:20:59.399461 systemd[1]: Created slice kubepods-burstable-podacfb057e_cf8f_411c_bab0_67133e6f15d6.slice - libcontainer container kubepods-burstable-podacfb057e_cf8f_411c_bab0_67133e6f15d6.slice. Mar 20 21:20:59.404906 systemd[1]: Created slice kubepods-burstable-poddd71d79f_0f27_4158_ad35_de2a85987791.slice - libcontainer container kubepods-burstable-poddd71d79f_0f27_4158_ad35_de2a85987791.slice. Mar 20 21:20:59.411308 systemd[1]: Created slice kubepods-besteffort-pode8cfa4d1_6131_408c_bca9_b99228f8f3b1.slice - libcontainer container kubepods-besteffort-pode8cfa4d1_6131_408c_bca9_b99228f8f3b1.slice. 
Mar 20 21:20:59.417004 systemd[1]: Created slice kubepods-besteffort-podedcbbdfb_9946_4648_8738_e45add8926bf.slice - libcontainer container kubepods-besteffort-podedcbbdfb_9946_4648_8738_e45add8926bf.slice. Mar 20 21:20:59.500255 kubelet[2587]: E0320 21:20:59.500205 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:59.520966 kubelet[2587]: I0320 21:20:59.520890 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn2s\" (UniqueName: \"kubernetes.io/projected/cbd026c9-21f3-43aa-9908-001b277aa4ec-kube-api-access-bwn2s\") pod \"calico-apiserver-665c6f8fcf-hvdfk\" (UID: \"cbd026c9-21f3-43aa-9908-001b277aa4ec\") " pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" Mar 20 21:20:59.520966 kubelet[2587]: I0320 21:20:59.520943 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edcbbdfb-9946-4648-8738-e45add8926bf-tigera-ca-bundle\") pod \"calico-kube-controllers-cf6b68695-xfrlz\" (UID: \"edcbbdfb-9946-4648-8738-e45add8926bf\") " pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" Mar 20 21:20:59.520966 kubelet[2587]: I0320 21:20:59.520969 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd71d79f-0f27-4158-ad35-de2a85987791-config-volume\") pod \"coredns-6f6b679f8f-4vn7t\" (UID: \"dd71d79f-0f27-4158-ad35-de2a85987791\") " pod="kube-system/coredns-6f6b679f8f-4vn7t" Mar 20 21:20:59.521206 kubelet[2587]: I0320 21:20:59.520999 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cbd026c9-21f3-43aa-9908-001b277aa4ec-calico-apiserver-certs\") 
pod \"calico-apiserver-665c6f8fcf-hvdfk\" (UID: \"cbd026c9-21f3-43aa-9908-001b277aa4ec\") " pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" Mar 20 21:20:59.521206 kubelet[2587]: I0320 21:20:59.521021 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnz6w\" (UniqueName: \"kubernetes.io/projected/dd71d79f-0f27-4158-ad35-de2a85987791-kube-api-access-wnz6w\") pod \"coredns-6f6b679f8f-4vn7t\" (UID: \"dd71d79f-0f27-4158-ad35-de2a85987791\") " pod="kube-system/coredns-6f6b679f8f-4vn7t" Mar 20 21:20:59.521206 kubelet[2587]: I0320 21:20:59.521047 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr992\" (UniqueName: \"kubernetes.io/projected/edcbbdfb-9946-4648-8738-e45add8926bf-kube-api-access-gr992\") pod \"calico-kube-controllers-cf6b68695-xfrlz\" (UID: \"edcbbdfb-9946-4648-8738-e45add8926bf\") " pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" Mar 20 21:20:59.521206 kubelet[2587]: I0320 21:20:59.521075 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pz4\" (UniqueName: \"kubernetes.io/projected/e8cfa4d1-6131-408c-bca9-b99228f8f3b1-kube-api-access-h5pz4\") pod \"calico-apiserver-665c6f8fcf-djwgd\" (UID: \"e8cfa4d1-6131-408c-bca9-b99228f8f3b1\") " pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" Mar 20 21:20:59.521206 kubelet[2587]: I0320 21:20:59.521098 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acfb057e-cf8f-411c-bab0-67133e6f15d6-config-volume\") pod \"coredns-6f6b679f8f-t7rtl\" (UID: \"acfb057e-cf8f-411c-bab0-67133e6f15d6\") " pod="kube-system/coredns-6f6b679f8f-t7rtl" Mar 20 21:20:59.521373 kubelet[2587]: I0320 21:20:59.521152 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8cfa4d1-6131-408c-bca9-b99228f8f3b1-calico-apiserver-certs\") pod \"calico-apiserver-665c6f8fcf-djwgd\" (UID: \"e8cfa4d1-6131-408c-bca9-b99228f8f3b1\") " pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" Mar 20 21:20:59.521373 kubelet[2587]: I0320 21:20:59.521175 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfpnr\" (UniqueName: \"kubernetes.io/projected/acfb057e-cf8f-411c-bab0-67133e6f15d6-kube-api-access-bfpnr\") pod \"coredns-6f6b679f8f-t7rtl\" (UID: \"acfb057e-cf8f-411c-bab0-67133e6f15d6\") " pod="kube-system/coredns-6f6b679f8f-t7rtl" Mar 20 21:20:59.645027 containerd[1483]: time="2025-03-20T21:20:59.644892590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-hvdfk,Uid:cbd026c9-21f3-43aa-9908-001b277aa4ec,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:20:59.704280 kubelet[2587]: E0320 21:20:59.703860 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:59.704645 containerd[1483]: time="2025-03-20T21:20:59.704574081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7rtl,Uid:acfb057e-cf8f-411c-bab0-67133e6f15d6,Namespace:kube-system,Attempt:0,}" Mar 20 21:20:59.708895 kubelet[2587]: E0320 21:20:59.708835 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:20:59.710477 containerd[1483]: time="2025-03-20T21:20:59.709988408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4vn7t,Uid:dd71d79f-0f27-4158-ad35-de2a85987791,Namespace:kube-system,Attempt:0,}" Mar 20 21:20:59.717809 containerd[1483]: time="2025-03-20T21:20:59.717756368Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-djwgd,Uid:e8cfa4d1-6131-408c-bca9-b99228f8f3b1,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:20:59.718089 containerd[1483]: time="2025-03-20T21:20:59.717815189Z" level=error msg="Failed to destroy network for sandbox \"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.720226 containerd[1483]: time="2025-03-20T21:20:59.720187156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf6b68695-xfrlz,Uid:edcbbdfb-9946-4648-8738-e45add8926bf,Namespace:calico-system,Attempt:0,}" Mar 20 21:20:59.727401 containerd[1483]: time="2025-03-20T21:20:59.727339202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-hvdfk,Uid:cbd026c9-21f3-43aa-9908-001b277aa4ec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.729589 kubelet[2587]: E0320 21:20:59.729404 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.729589 kubelet[2587]: E0320 21:20:59.729539 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" Mar 20 21:20:59.729589 kubelet[2587]: E0320 21:20:59.729571 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" Mar 20 21:20:59.729813 kubelet[2587]: E0320 21:20:59.729746 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-665c6f8fcf-hvdfk_calico-apiserver(cbd026c9-21f3-43aa-9908-001b277aa4ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-665c6f8fcf-hvdfk_calico-apiserver(cbd026c9-21f3-43aa-9908-001b277aa4ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1e0ae6649236be704a021714720a7044d8cf63164434d2a945f465cdc890c34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" podUID="cbd026c9-21f3-43aa-9908-001b277aa4ec" Mar 20 21:20:59.799622 containerd[1483]: time="2025-03-20T21:20:59.797377668Z" level=error msg="Failed to destroy network for sandbox \"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.799622 containerd[1483]: time="2025-03-20T21:20:59.799478503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7rtl,Uid:acfb057e-cf8f-411c-bab0-67133e6f15d6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.800275 containerd[1483]: time="2025-03-20T21:20:59.800221245Z" level=error msg="Failed to destroy network for sandbox \"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.800806 kubelet[2587]: E0320 21:20:59.800755 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.800886 kubelet[2587]: E0320 21:20:59.800836 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t7rtl" Mar 20 
21:20:59.800886 kubelet[2587]: E0320 21:20:59.800857 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t7rtl" Mar 20 21:20:59.800969 kubelet[2587]: E0320 21:20:59.800915 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t7rtl_kube-system(acfb057e-cf8f-411c-bab0-67133e6f15d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t7rtl_kube-system(acfb057e-cf8f-411c-bab0-67133e6f15d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b5f8ae3b17ae8e8f9265ac23d34ee912f60761543630a02e8671e0bf7d7b56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t7rtl" podUID="acfb057e-cf8f-411c-bab0-67133e6f15d6" Mar 20 21:20:59.804296 containerd[1483]: time="2025-03-20T21:20:59.804138377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4vn7t,Uid:dd71d79f-0f27-4158-ad35-de2a85987791,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.804472 kubelet[2587]: E0320 21:20:59.804411 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.804538 kubelet[2587]: E0320 21:20:59.804485 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4vn7t" Mar 20 21:20:59.804538 kubelet[2587]: E0320 21:20:59.804510 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4vn7t" Mar 20 21:20:59.804628 kubelet[2587]: E0320 21:20:59.804558 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4vn7t_kube-system(dd71d79f-0f27-4158-ad35-de2a85987791)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4vn7t_kube-system(dd71d79f-0f27-4158-ad35-de2a85987791)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec505135feeec8fffce0b9bb6bf739ea3f687cc5588ecc6118fcf5d2b9041df3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-6f6b679f8f-4vn7t" podUID="dd71d79f-0f27-4158-ad35-de2a85987791" Mar 20 21:20:59.813116 containerd[1483]: time="2025-03-20T21:20:59.813048352Z" level=error msg="Failed to destroy network for sandbox \"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.814905 containerd[1483]: time="2025-03-20T21:20:59.814852847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf6b68695-xfrlz,Uid:edcbbdfb-9946-4648-8738-e45add8926bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.815162 kubelet[2587]: E0320 21:20:59.815110 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.815226 kubelet[2587]: E0320 21:20:59.815187 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" Mar 20 21:20:59.815226 kubelet[2587]: E0320 21:20:59.815213 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" Mar 20 21:20:59.815320 kubelet[2587]: E0320 21:20:59.815267 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cf6b68695-xfrlz_calico-system(edcbbdfb-9946-4648-8738-e45add8926bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cf6b68695-xfrlz_calico-system(edcbbdfb-9946-4648-8738-e45add8926bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc10ca867d397c92369ddaaf392ba00387557f2d88f77bcb4aee55d9f83b0729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" podUID="edcbbdfb-9946-4648-8738-e45add8926bf" Mar 20 21:20:59.815648 containerd[1483]: time="2025-03-20T21:20:59.815591411Z" level=error msg="Failed to destroy network for sandbox \"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.817050 containerd[1483]: time="2025-03-20T21:20:59.816975654Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-djwgd,Uid:e8cfa4d1-6131-408c-bca9-b99228f8f3b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.817240 kubelet[2587]: E0320 21:20:59.817201 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:20:59.817308 kubelet[2587]: E0320 21:20:59.817252 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" Mar 20 21:20:59.817308 kubelet[2587]: E0320 21:20:59.817277 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" Mar 20 21:20:59.817398 kubelet[2587]: E0320 21:20:59.817325 2587 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-665c6f8fcf-djwgd_calico-apiserver(e8cfa4d1-6131-408c-bca9-b99228f8f3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-665c6f8fcf-djwgd_calico-apiserver(e8cfa4d1-6131-408c-bca9-b99228f8f3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed18177f14a369cbcf1dec5fd8e50afbb9ec44feb67507898a2abcbebe2d5226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" podUID="e8cfa4d1-6131-408c-bca9-b99228f8f3b1" Mar 20 21:21:00.305553 systemd[1]: run-netns-cni\x2d2208d564\x2d5537\x2dd37c\x2da9c5\x2d33357dd122b7.mount: Deactivated successfully. Mar 20 21:21:00.375929 systemd[1]: Created slice kubepods-besteffort-pod56991647_fb34_46d6_857a_df4e1a226084.slice - libcontainer container kubepods-besteffort-pod56991647_fb34_46d6_857a_df4e1a226084.slice. 
Mar 20 21:21:00.378785 containerd[1483]: time="2025-03-20T21:21:00.378742489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrs62,Uid:56991647-fb34-46d6-857a-df4e1a226084,Namespace:calico-system,Attempt:0,}" Mar 20 21:21:00.434997 containerd[1483]: time="2025-03-20T21:21:00.434939514Z" level=error msg="Failed to destroy network for sandbox \"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:21:00.436454 containerd[1483]: time="2025-03-20T21:21:00.436411181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrs62,Uid:56991647-fb34-46d6-857a-df4e1a226084,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:21:00.436723 kubelet[2587]: E0320 21:21:00.436674 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:21:00.436814 kubelet[2587]: E0320 21:21:00.436738 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrs62" Mar 20 21:21:00.436814 kubelet[2587]: E0320 21:21:00.436762 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrs62" Mar 20 21:21:00.436900 kubelet[2587]: E0320 21:21:00.436816 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rrs62_calico-system(56991647-fb34-46d6-857a-df4e1a226084)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rrs62_calico-system(56991647-fb34-46d6-857a-df4e1a226084)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e8106c011c34e274bd55778f8ad457b84a92f662471da06ab45845c1322143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrs62" podUID="56991647-fb34-46d6-857a-df4e1a226084" Mar 20 21:21:00.437871 systemd[1]: run-netns-cni\x2dbbd386d6\x2dbacd\x2d4954\x2d829d\x2dd90505e415f4.mount: Deactivated successfully. 
Mar 20 21:21:00.505519 kubelet[2587]: E0320 21:21:00.505482 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:00.506431 containerd[1483]: time="2025-03-20T21:21:00.506357303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 20 21:21:03.567592 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:35394.service - OpenSSH per-connection server daemon (10.0.0.1:35394). Mar 20 21:21:03.631761 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 35394 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:03.633672 sshd-session[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:03.639922 systemd-logind[1460]: New session 8 of user core. Mar 20 21:21:03.649007 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:21:03.805882 sshd[3533]: Connection closed by 10.0.0.1 port 35394 Mar 20 21:21:03.806235 sshd-session[3531]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:03.810734 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:35394.service: Deactivated successfully. Mar 20 21:21:03.813166 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:21:03.813997 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:21:03.815095 systemd-logind[1460]: Removed session 8. 
Mar 20 21:21:04.451242 kubelet[2587]: I0320 21:21:04.450218 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:21:04.451242 kubelet[2587]: E0320 21:21:04.450636 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:04.513939 kubelet[2587]: E0320 21:21:04.513899 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:05.924736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703791693.mount: Deactivated successfully. Mar 20 21:21:06.267873 containerd[1483]: time="2025-03-20T21:21:06.267695734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:21:06.268516 containerd[1483]: time="2025-03-20T21:21:06.268434765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 20 21:21:06.269712 containerd[1483]: time="2025-03-20T21:21:06.269672747Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:21:06.271523 containerd[1483]: time="2025-03-20T21:21:06.271476924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:21:06.272063 containerd[1483]: time="2025-03-20T21:21:06.272017372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 5.765621646s" Mar 20 21:21:06.272063 containerd[1483]: time="2025-03-20T21:21:06.272055804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 20 21:21:06.281574 containerd[1483]: time="2025-03-20T21:21:06.281512552Z" level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 20 21:21:06.310592 containerd[1483]: time="2025-03-20T21:21:06.310533353Z" level=info msg="Container 2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:21:06.325739 containerd[1483]: time="2025-03-20T21:21:06.325670458Z" level=info msg="CreateContainer within sandbox \"4baae2a8a3e621274f2ab13da53d907072c07aab71c7065bb6d3fdde320ea109\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\"" Mar 20 21:21:06.326227 containerd[1483]: time="2025-03-20T21:21:06.326183454Z" level=info msg="StartContainer for \"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\"" Mar 20 21:21:06.327713 containerd[1483]: time="2025-03-20T21:21:06.327682257Z" level=info msg="connecting to shim 2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808" address="unix:///run/containerd/s/05cb1025642afeb2f882416624fd3de7d3655c28c3c98b40f6568c5fcd48e863" protocol=ttrpc version=3 Mar 20 21:21:06.351800 systemd[1]: Started cri-containerd-2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808.scope - libcontainer container 2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808. 
Mar 20 21:21:06.464647 containerd[1483]: time="2025-03-20T21:21:06.464568764Z" level=info msg="StartContainer for \"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\" returns successfully" Mar 20 21:21:06.499787 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 20 21:21:06.499922 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 20 21:21:06.522198 kubelet[2587]: E0320 21:21:06.522021 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:06.535925 kubelet[2587]: I0320 21:21:06.535836 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hsg2l" podStartSLOduration=1.7710476389999998 podStartE2EDuration="19.535813358s" podCreationTimestamp="2025-03-20 21:20:47 +0000 UTC" firstStartedPulling="2025-03-20 21:20:48.507933377 +0000 UTC m=+12.268240586" lastFinishedPulling="2025-03-20 21:21:06.272699096 +0000 UTC m=+30.033006305" observedRunningTime="2025-03-20 21:21:06.534726672 +0000 UTC m=+30.295033881" watchObservedRunningTime="2025-03-20 21:21:06.535813358 +0000 UTC m=+30.296120567" Mar 20 21:21:06.622667 containerd[1483]: time="2025-03-20T21:21:06.621805133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\" id:\"77aa7a72a01537505e15bb33341ce723b8d87e5bf4c0f37699922b1869e2d693\" pid:3610 exit_status:1 exited_at:{seconds:1742505666 nanos:621287669}" Mar 20 21:21:07.523663 kubelet[2587]: E0320 21:21:07.523571 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:07.590187 containerd[1483]: time="2025-03-20T21:21:07.590124408Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\" id:\"4a6820c83a038473ed6cd5ec04a71983a914e7972d7fe514fe1a3ff9acf6d364\" pid:3655 exit_status:1 exited_at:{seconds:1742505667 nanos:589680504}" Mar 20 21:21:08.003635 kernel: bpftool[3777]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 20 21:21:08.253043 systemd-networkd[1412]: vxlan.calico: Link UP Mar 20 21:21:08.253056 systemd-networkd[1412]: vxlan.calico: Gained carrier Mar 20 21:21:08.821259 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402). Mar 20 21:21:08.881383 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:08.883239 sshd-session[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:08.888038 systemd-logind[1460]: New session 9 of user core. Mar 20 21:21:08.897760 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:21:09.038034 sshd[3871]: Connection closed by 10.0.0.1 port 35402 Mar 20 21:21:09.038396 sshd-session[3867]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:09.042618 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:35402.service: Deactivated successfully. Mar 20 21:21:09.045124 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:21:09.045942 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:21:09.046989 systemd-logind[1460]: Removed session 9. 
Mar 20 21:21:09.497779 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Mar 20 21:21:10.368801 kubelet[2587]: E0320 21:21:10.368719 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:10.369586 containerd[1483]: time="2025-03-20T21:21:10.369471648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7rtl,Uid:acfb057e-cf8f-411c-bab0-67133e6f15d6,Namespace:kube-system,Attempt:0,}" Mar 20 21:21:10.545808 systemd-networkd[1412]: cali06c7c1218fc: Link UP Mar 20 21:21:10.546907 systemd-networkd[1412]: cali06c7c1218fc: Gained carrier Mar 20 21:21:10.562168 containerd[1483]: 2025-03-20 21:21:10.437 [INFO][3888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0 coredns-6f6b679f8f- kube-system acfb057e-cf8f-411c-bab0-67133e6f15d6 691 0 2025-03-20 21:20:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-t7rtl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06c7c1218fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-" Mar 20 21:21:10.562168 containerd[1483]: 2025-03-20 21:21:10.438 [INFO][3888] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.562168 containerd[1483]: 2025-03-20 21:21:10.504 
[INFO][3902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" HandleID="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Workload="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.514 [INFO][3902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" HandleID="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Workload="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000516c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-t7rtl", "timestamp":"2025-03-20 21:21:10.504929136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.515 [INFO][3902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.515 [INFO][3902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.515 [INFO][3902] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.517 [INFO][3902] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" host="localhost" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.523 [INFO][3902] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.527 [INFO][3902] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.528 [INFO][3902] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.530 [INFO][3902] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:10.562506 containerd[1483]: 2025-03-20 21:21:10.530 [INFO][3902] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" host="localhost" Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.532 [INFO][3902] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771 Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.535 [INFO][3902] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" host="localhost" Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.539 [INFO][3902] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" host="localhost" Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.539 [INFO][3902] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" host="localhost" Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.539 [INFO][3902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:21:10.563101 containerd[1483]: 2025-03-20 21:21:10.539 [INFO][3902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" HandleID="k8s-pod-network.2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Workload="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.563294 containerd[1483]: 2025-03-20 21:21:10.542 [INFO][3888] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"acfb057e-cf8f-411c-bab0-67133e6f15d6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-t7rtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06c7c1218fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:10.563389 containerd[1483]: 2025-03-20 21:21:10.543 [INFO][3888] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.563389 containerd[1483]: 2025-03-20 21:21:10.543 [INFO][3888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06c7c1218fc ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.563389 containerd[1483]: 2025-03-20 21:21:10.547 [INFO][3888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 
21:21:10.563494 containerd[1483]: 2025-03-20 21:21:10.547 [INFO][3888] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"acfb057e-cf8f-411c-bab0-67133e6f15d6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771", Pod:"coredns-6f6b679f8f-t7rtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06c7c1218fc", MAC:"c2:e1:e4:e8:39:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:10.563494 containerd[1483]: 2025-03-20 21:21:10.556 [INFO][3888] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7rtl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7rtl-eth0" Mar 20 21:21:10.715720 containerd[1483]: time="2025-03-20T21:21:10.715646810Z" level=info msg="connecting to shim 2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771" address="unix:///run/containerd/s/39d38b493d970a79bd02665408ee099b6281652664e484fa677e3103024fe5bb" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:10.775895 systemd[1]: Started cri-containerd-2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771.scope - libcontainer container 2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771. 
Mar 20 21:21:10.790587 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:21:10.828935 containerd[1483]: time="2025-03-20T21:21:10.828875122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7rtl,Uid:acfb057e-cf8f-411c-bab0-67133e6f15d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771\"" Mar 20 21:21:10.829813 kubelet[2587]: E0320 21:21:10.829768 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:10.831951 containerd[1483]: time="2025-03-20T21:21:10.831924610Z" level=info msg="CreateContainer within sandbox \"2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:21:10.848908 containerd[1483]: time="2025-03-20T21:21:10.848837803Z" level=info msg="Container 2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:21:10.849164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982537717.mount: Deactivated successfully. 
Mar 20 21:21:10.855826 containerd[1483]: time="2025-03-20T21:21:10.855772852Z" level=info msg="CreateContainer within sandbox \"2693b4563e1a8aa8d3b8a650ce9079369f838572f6458bfa68bb52b8a141e771\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49\"" Mar 20 21:21:10.856422 containerd[1483]: time="2025-03-20T21:21:10.856387217Z" level=info msg="StartContainer for \"2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49\"" Mar 20 21:21:10.857418 containerd[1483]: time="2025-03-20T21:21:10.857379625Z" level=info msg="connecting to shim 2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49" address="unix:///run/containerd/s/39d38b493d970a79bd02665408ee099b6281652664e484fa677e3103024fe5bb" protocol=ttrpc version=3 Mar 20 21:21:10.881840 systemd[1]: Started cri-containerd-2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49.scope - libcontainer container 2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49. 
Mar 20 21:21:10.964669 containerd[1483]: time="2025-03-20T21:21:10.964622646Z" level=info msg="StartContainer for \"2ab285298d495c7a3b55bbb8b1485761f36283658cc99ed11f944054ef53be49\" returns successfully" Mar 20 21:21:11.369672 containerd[1483]: time="2025-03-20T21:21:11.369594750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-djwgd,Uid:e8cfa4d1-6131-408c-bca9-b99228f8f3b1,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:21:11.369672 containerd[1483]: time="2025-03-20T21:21:11.369619476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrs62,Uid:56991647-fb34-46d6-857a-df4e1a226084,Namespace:calico-system,Attempt:0,}" Mar 20 21:21:11.370163 containerd[1483]: time="2025-03-20T21:21:11.369612874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-hvdfk,Uid:cbd026c9-21f3-43aa-9908-001b277aa4ec,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:21:11.503557 systemd-networkd[1412]: cali865a2fbee4d: Link UP Mar 20 21:21:11.503889 systemd-networkd[1412]: cali865a2fbee4d: Gained carrier Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.422 [INFO][4009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0 calico-apiserver-665c6f8fcf- calico-apiserver e8cfa4d1-6131-408c-bca9-b99228f8f3b1 694 0 2025-03-20 21:20:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:665c6f8fcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-665c6f8fcf-djwgd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali865a2fbee4d [] []}} ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" 
Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.422 [INFO][4009] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.459 [INFO][4052] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" HandleID="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" HandleID="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-665c6f8fcf-djwgd", "timestamp":"2025-03-20 21:21:11.459928976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4052] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.472 [INFO][4052] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.476 [INFO][4052] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.480 [INFO][4052] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.482 [INFO][4052] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.485 [INFO][4052] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.486 [INFO][4052] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.487 [INFO][4052] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818 Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.491 [INFO][4052] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.497 [INFO][4052] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.497 [INFO][4052] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" host="localhost" Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.497 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:21:11.519627 containerd[1483]: 2025-03-20 21:21:11.497 [INFO][4052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" HandleID="k8s-pod-network.a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.500 [INFO][4009] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0", GenerateName:"calico-apiserver-665c6f8fcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cfa4d1-6131-408c-bca9-b99228f8f3b1", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665c6f8fcf", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-665c6f8fcf-djwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali865a2fbee4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.500 [INFO][4009] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.500 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali865a2fbee4d ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.504 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.504 [INFO][4009] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0", GenerateName:"calico-apiserver-665c6f8fcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cfa4d1-6131-408c-bca9-b99228f8f3b1", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665c6f8fcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818", Pod:"calico-apiserver-665c6f8fcf-djwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali865a2fbee4d", MAC:"4e:ad:99:1f:dd:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.520200 containerd[1483]: 2025-03-20 21:21:11.516 [INFO][4009] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-djwgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--djwgd-eth0" Mar 20 21:21:11.531640 kubelet[2587]: E0320 21:21:11.531587 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:11.549174 kubelet[2587]: I0320 21:21:11.549004 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t7rtl" podStartSLOduration=29.548983445 podStartE2EDuration="29.548983445s" podCreationTimestamp="2025-03-20 21:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:21:11.545835994 +0000 UTC m=+35.306143203" watchObservedRunningTime="2025-03-20 21:21:11.548983445 +0000 UTC m=+35.309290654" Mar 20 21:21:11.572146 containerd[1483]: time="2025-03-20T21:21:11.572077159Z" level=info msg="connecting to shim a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818" address="unix:///run/containerd/s/d08bd80d5714c5e4134c3b0fdd4da7e6fe5f5133e684fa08eb6487a9cbfbbdd2" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:11.611224 systemd-networkd[1412]: cali8d003000629: Link UP Mar 20 21:21:11.611997 systemd-networkd[1412]: cali8d003000629: Gained carrier Mar 20 21:21:11.613029 systemd[1]: Started cri-containerd-a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818.scope - libcontainer container a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818. 
Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.423 [INFO][4023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0 calico-apiserver-665c6f8fcf- calico-apiserver cbd026c9-21f3-43aa-9908-001b277aa4ec 687 0 2025-03-20 21:20:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:665c6f8fcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-665c6f8fcf-hvdfk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8d003000629 [] []}} ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.423 [INFO][4023] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.459 [INFO][4050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" HandleID="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" 
HandleID="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-665c6f8fcf-hvdfk", "timestamp":"2025-03-20 21:21:11.459278723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.498 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.498 [INFO][4050] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.573 [INFO][4050] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.578 [INFO][4050] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.581 [INFO][4050] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.583 [INFO][4050] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.585 [INFO][4050] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 
2025-03-20 21:21:11.585 [INFO][4050] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.586 [INFO][4050] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.593 [INFO][4050] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.599 [INFO][4050] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.599 [INFO][4050] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" host="localhost" Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.599 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:21:11.629864 containerd[1483]: 2025-03-20 21:21:11.599 [INFO][4050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" HandleID="k8s-pod-network.2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Workload="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.604 [INFO][4023] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0", GenerateName:"calico-apiserver-665c6f8fcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"cbd026c9-21f3-43aa-9908-001b277aa4ec", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665c6f8fcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-665c6f8fcf-hvdfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d003000629", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.604 [INFO][4023] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.604 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d003000629 ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.612 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.612 [INFO][4023] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0", GenerateName:"calico-apiserver-665c6f8fcf-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"cbd026c9-21f3-43aa-9908-001b277aa4ec", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665c6f8fcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa", Pod:"calico-apiserver-665c6f8fcf-hvdfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d003000629", MAC:"9a:97:3e:4f:56:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.630898 containerd[1483]: 2025-03-20 21:21:11.625 [INFO][4023] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" Namespace="calico-apiserver" Pod="calico-apiserver-665c6f8fcf-hvdfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--665c6f8fcf--hvdfk-eth0" Mar 20 21:21:11.647115 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:21:11.663465 containerd[1483]: time="2025-03-20T21:21:11.663396118Z" level=info msg="connecting to shim 2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa" 
address="unix:///run/containerd/s/b892168160eb1a0b272813e7acdb76637b89f28a5f027d049c71309905b925f6" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:11.697856 systemd[1]: Started cri-containerd-2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa.scope - libcontainer container 2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa. Mar 20 21:21:11.702385 containerd[1483]: time="2025-03-20T21:21:11.702321143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-djwgd,Uid:e8cfa4d1-6131-408c-bca9-b99228f8f3b1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818\"" Mar 20 21:21:11.704793 containerd[1483]: time="2025-03-20T21:21:11.704727259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 21:21:11.719071 systemd-networkd[1412]: calib4755857355: Link UP Mar 20 21:21:11.719935 systemd-networkd[1412]: calib4755857355: Gained carrier Mar 20 21:21:11.720869 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.422 [INFO][4004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rrs62-eth0 csi-node-driver- calico-system 56991647-fb34-46d6-857a-df4e1a226084 597 0 2025-03-20 21:20:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rrs62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib4755857355 [] []}} ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" 
Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.422 [INFO][4004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.462 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" HandleID="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Workload="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.470 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" HandleID="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Workload="localhost-k8s-csi--node--driver--rrs62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rrs62", "timestamp":"2025-03-20 21:21:11.462892281 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.471 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.600 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.600 [INFO][4048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.675 [INFO][4048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.681 [INFO][4048] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.686 [INFO][4048] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.688 [INFO][4048] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.689 [INFO][4048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.690 [INFO][4048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.692 [INFO][4048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622 Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.696 [INFO][4048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.706 [INFO][4048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.707 [INFO][4048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" host="localhost" Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.707 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:21:11.736419 containerd[1483]: 2025-03-20 21:21:11.707 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" HandleID="k8s-pod-network.28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Workload="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.713 [INFO][4004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrs62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56991647-fb34-46d6-857a-df4e1a226084", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rrs62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4755857355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.714 [INFO][4004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.715 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4755857355 ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.718 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.719 [INFO][4004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" 
Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrs62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56991647-fb34-46d6-857a-df4e1a226084", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622", Pod:"csi-node-driver-rrs62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4755857355", MAC:"86:08:dc:9e:bc:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:11.737141 containerd[1483]: 2025-03-20 21:21:11.731 [INFO][4004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" Namespace="calico-system" Pod="csi-node-driver-rrs62" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrs62-eth0" Mar 20 21:21:11.766057 containerd[1483]: 
time="2025-03-20T21:21:11.765937057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665c6f8fcf-hvdfk,Uid:cbd026c9-21f3-43aa-9908-001b277aa4ec,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa\"" Mar 20 21:21:11.774797 containerd[1483]: time="2025-03-20T21:21:11.774733274Z" level=info msg="connecting to shim 28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622" address="unix:///run/containerd/s/7915da30cfad3eb2e9471c4c71688cfffcf399f517825dabfdba799bcd096c74" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:11.802816 systemd[1]: Started cri-containerd-28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622.scope - libcontainer container 28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622. Mar 20 21:21:11.818292 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:21:11.832659 containerd[1483]: time="2025-03-20T21:21:11.832613598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrs62,Uid:56991647-fb34-46d6-857a-df4e1a226084,Namespace:calico-system,Attempt:0,} returns sandbox id \"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622\"" Mar 20 21:21:12.369162 kubelet[2587]: E0320 21:21:12.369082 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:12.369543 containerd[1483]: time="2025-03-20T21:21:12.369482055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf6b68695-xfrlz,Uid:edcbbdfb-9946-4648-8738-e45add8926bf,Namespace:calico-system,Attempt:0,}" Mar 20 21:21:12.370203 containerd[1483]: time="2025-03-20T21:21:12.369801215Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-4vn7t,Uid:dd71d79f-0f27-4158-ad35-de2a85987791,Namespace:kube-system,Attempt:0,}" Mar 20 21:21:12.442706 systemd-networkd[1412]: cali06c7c1218fc: Gained IPv6LL Mar 20 21:21:12.489027 systemd-networkd[1412]: cali95d9b884f6e: Link UP Mar 20 21:21:12.489717 systemd-networkd[1412]: cali95d9b884f6e: Gained carrier Mar 20 21:21:12.539230 kubelet[2587]: E0320 21:21:12.539175 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.414 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0 calico-kube-controllers-cf6b68695- calico-system edcbbdfb-9946-4648-8738-e45add8926bf 695 0 2025-03-20 21:20:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cf6b68695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cf6b68695-xfrlz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95d9b884f6e [] []}} ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.414 [INFO][4262] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 
20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.446 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" HandleID="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Workload="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.459 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" HandleID="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Workload="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362a20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cf6b68695-xfrlz", "timestamp":"2025-03-20 21:21:12.446801403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.459 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.459 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.459 [INFO][4295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.462 [INFO][4295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.466 [INFO][4295] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.470 [INFO][4295] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.471 [INFO][4295] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.473 [INFO][4295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.473 [INFO][4295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.474 [INFO][4295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.478 [INFO][4295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.483 [INFO][4295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.483 [INFO][4295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" host="localhost" Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.483 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:21:12.540778 containerd[1483]: 2025-03-20 21:21:12.483 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" HandleID="k8s-pod-network.ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Workload="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.486 [INFO][4262] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0", GenerateName:"calico-kube-controllers-cf6b68695-", Namespace:"calico-system", SelfLink:"", UID:"edcbbdfb-9946-4648-8738-e45add8926bf", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf6b68695", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cf6b68695-xfrlz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95d9b884f6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.486 [INFO][4262] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.486 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95d9b884f6e ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.488 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.489 [INFO][4262] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0", GenerateName:"calico-kube-controllers-cf6b68695-", Namespace:"calico-system", SelfLink:"", UID:"edcbbdfb-9946-4648-8738-e45add8926bf", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf6b68695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df", Pod:"calico-kube-controllers-cf6b68695-xfrlz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95d9b884f6e", MAC:"ee:9d:02:f9:b4:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:12.542160 containerd[1483]: 2025-03-20 21:21:12.534 [INFO][4262] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" Namespace="calico-system" Pod="calico-kube-controllers-cf6b68695-xfrlz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf6b68695--xfrlz-eth0" Mar 20 21:21:12.632777 systemd-networkd[1412]: cali865a2fbee4d: Gained IPv6LL Mar 20 21:21:12.637774 systemd-networkd[1412]: cali8d9b13110af: Link UP Mar 20 21:21:12.637983 systemd-networkd[1412]: cali8d9b13110af: Gained carrier Mar 20 21:21:12.639931 containerd[1483]: time="2025-03-20T21:21:12.639334934Z" level=info msg="connecting to shim ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df" address="unix:///run/containerd/s/a98db47b71f5de1855810200a5fcf2ab1db8bbcaef710080bc7b2cbe62b22910" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.414 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0 coredns-6f6b679f8f- kube-system dd71d79f-0f27-4158-ad35-de2a85987791 693 0 2025-03-20 21:20:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-4vn7t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d9b13110af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.414 [INFO][4268] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.452 [INFO][4293] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" HandleID="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Workload="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.460 [INFO][4293] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" HandleID="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Workload="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-4vn7t", "timestamp":"2025-03-20 21:21:12.452403169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.460 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.483 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.484 [INFO][4293] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.565 [INFO][4293] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.576 [INFO][4293] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.583 [INFO][4293] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.585 [INFO][4293] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.589 [INFO][4293] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.589 [INFO][4293] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.591 [INFO][4293] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414 Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.599 [INFO][4293] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.629 [INFO][4293] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.629 [INFO][4293] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" host="localhost" Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.629 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:21:12.661649 containerd[1483]: 2025-03-20 21:21:12.629 [INFO][4293] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" HandleID="k8s-pod-network.5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Workload="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.633 [INFO][4268] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd71d79f-0f27-4158-ad35-de2a85987791", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-4vn7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9b13110af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.634 [INFO][4268] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.634 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d9b13110af ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.637 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 
21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.637 [INFO][4268] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd71d79f-0f27-4158-ad35-de2a85987791", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414", Pod:"coredns-6f6b679f8f-4vn7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9b13110af", MAC:"0a:1f:30:f2:32:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:21:12.662280 containerd[1483]: 2025-03-20 21:21:12.654 [INFO][4268] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" Namespace="kube-system" Pod="coredns-6f6b679f8f-4vn7t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4vn7t-eth0" Mar 20 21:21:12.676167 systemd[1]: Started cri-containerd-ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df.scope - libcontainer container ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df. Mar 20 21:21:12.694762 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:21:12.715273 containerd[1483]: time="2025-03-20T21:21:12.715198394Z" level=info msg="connecting to shim 5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414" address="unix:///run/containerd/s/c144edf744d5c0b525627dac935761dbb30f25451f7eb5fdc2d990cbea55e8e8" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:21:12.731289 containerd[1483]: time="2025-03-20T21:21:12.731209529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf6b68695-xfrlz,Uid:edcbbdfb-9946-4648-8738-e45add8926bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df\"" Mar 20 21:21:12.757874 systemd[1]: Started cri-containerd-5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414.scope - libcontainer container 5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414. 
Mar 20 21:21:12.776079 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 21:21:12.851347 containerd[1483]: time="2025-03-20T21:21:12.851279906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4vn7t,Uid:dd71d79f-0f27-4158-ad35-de2a85987791,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414\""
Mar 20 21:21:12.852304 kubelet[2587]: E0320 21:21:12.852264 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:12.854783 containerd[1483]: time="2025-03-20T21:21:12.854736307Z" level=info msg="CreateContainer within sandbox \"5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 21:21:12.886146 containerd[1483]: time="2025-03-20T21:21:12.885967950Z" level=info msg="Container 56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:12.889781 systemd-networkd[1412]: calib4755857355: Gained IPv6LL
Mar 20 21:21:12.899079 containerd[1483]: time="2025-03-20T21:21:12.899025790Z" level=info msg="CreateContainer within sandbox \"5b34557cb6a9d6f6f5f8fca32e5396f31345954c588f5fbbaab6c016c8e81414\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc\""
Mar 20 21:21:12.899969 containerd[1483]: time="2025-03-20T21:21:12.899928829Z" level=info msg="StartContainer for \"56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc\""
Mar 20 21:21:12.905543 containerd[1483]: time="2025-03-20T21:21:12.905487744Z" level=info msg="connecting to shim 56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc" address="unix:///run/containerd/s/c144edf744d5c0b525627dac935761dbb30f25451f7eb5fdc2d990cbea55e8e8" protocol=ttrpc version=3
Mar 20 21:21:12.934872 systemd[1]: Started cri-containerd-56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc.scope - libcontainer container 56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc.
Mar 20 21:21:12.970035 containerd[1483]: time="2025-03-20T21:21:12.969987220Z" level=info msg="StartContainer for \"56454a0882c63c678fc199e1b7742cb43c941ed70308e991077f1272c4dc81dc\" returns successfully"
Mar 20 21:21:13.080875 systemd-networkd[1412]: cali8d003000629: Gained IPv6LL
Mar 20 21:21:13.544472 kubelet[2587]: E0320 21:21:13.544409 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:13.544472 kubelet[2587]: E0320 21:21:13.544443 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:13.557192 kubelet[2587]: I0320 21:21:13.557117 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4vn7t" podStartSLOduration=31.557096081 podStartE2EDuration="31.557096081s" podCreationTimestamp="2025-03-20 21:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:21:13.556660001 +0000 UTC m=+37.316967220" watchObservedRunningTime="2025-03-20 21:21:13.557096081 +0000 UTC m=+37.317403290"
Mar 20 21:21:14.054225 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950).
Mar 20 21:21:14.127770 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:14.129080 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:14.137949 systemd-logind[1460]: New session 10 of user core.
Mar 20 21:21:14.143772 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 20 21:21:14.297685 sshd[4477]: Connection closed by 10.0.0.1 port 38950
Mar 20 21:21:14.298048 sshd-session[4475]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:14.308009 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:38950.service: Deactivated successfully.
Mar 20 21:21:14.310924 systemd[1]: session-10.scope: Deactivated successfully.
Mar 20 21:21:14.314161 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit.
Mar 20 21:21:14.318847 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:38966.service - OpenSSH per-connection server daemon (10.0.0.1:38966).
Mar 20 21:21:14.319896 systemd-logind[1460]: Removed session 10.
Mar 20 21:21:14.361089 systemd-networkd[1412]: cali95d9b884f6e: Gained IPv6LL
Mar 20 21:21:14.362780 systemd-networkd[1412]: cali8d9b13110af: Gained IPv6LL
Mar 20 21:21:14.369399 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:14.371117 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:14.376192 systemd-logind[1460]: New session 11 of user core.
Mar 20 21:21:14.384738 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 20 21:21:14.420410 containerd[1483]: time="2025-03-20T21:21:14.420341330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:14.421069 containerd[1483]: time="2025-03-20T21:21:14.420987304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204"
Mar 20 21:21:14.422089 containerd[1483]: time="2025-03-20T21:21:14.422049121Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:14.423937 containerd[1483]: time="2025-03-20T21:21:14.423899820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:14.424478 containerd[1483]: time="2025-03-20T21:21:14.424439135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.719646584s"
Mar 20 21:21:14.424478 containerd[1483]: time="2025-03-20T21:21:14.424472548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 20 21:21:14.425641 containerd[1483]: time="2025-03-20T21:21:14.425553620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 20 21:21:14.426640 containerd[1483]: time="2025-03-20T21:21:14.426615167Z" level=info msg="CreateContainer within sandbox \"a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 20 21:21:14.436173 containerd[1483]: time="2025-03-20T21:21:14.435385207Z" level=info msg="Container 9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:14.453874 containerd[1483]: time="2025-03-20T21:21:14.453812646Z" level=info msg="CreateContainer within sandbox \"a684005be0aff927ad0962b9c1afc28c646ded5ac4eef6027c2c7122a0b31818\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb\""
Mar 20 21:21:14.456259 containerd[1483]: time="2025-03-20T21:21:14.454538460Z" level=info msg="StartContainer for \"9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb\""
Mar 20 21:21:14.456259 containerd[1483]: time="2025-03-20T21:21:14.455669087Z" level=info msg="connecting to shim 9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb" address="unix:///run/containerd/s/d08bd80d5714c5e4134c3b0fdd4da7e6fe5f5133e684fa08eb6487a9cbfbbdd2" protocol=ttrpc version=3
Mar 20 21:21:14.488866 systemd[1]: Started cri-containerd-9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb.scope - libcontainer container 9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb.
Mar 20 21:21:14.600749 containerd[1483]: time="2025-03-20T21:21:14.600581344Z" level=info msg="StartContainer for \"9982bb3a4118f22f6d7e95a79395d0f7064ee6e8e1df557fdf2443b36416e4bb\" returns successfully"
Mar 20 21:21:14.603459 kubelet[2587]: E0320 21:21:14.603422 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:14.611267 sshd[4495]: Connection closed by 10.0.0.1 port 38966
Mar 20 21:21:14.612379 sshd-session[4492]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:14.624386 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit.
Mar 20 21:21:14.628209 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:38972.service - OpenSSH per-connection server daemon (10.0.0.1:38972).
Mar 20 21:21:14.631426 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:38966.service: Deactivated successfully.
Mar 20 21:21:14.636375 systemd[1]: session-11.scope: Deactivated successfully.
Mar 20 21:21:14.640382 systemd-logind[1460]: Removed session 11.
Mar 20 21:21:14.700523 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 38972 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:14.702343 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:14.707870 systemd-logind[1460]: New session 12 of user core.
Mar 20 21:21:14.715870 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 20 21:21:14.851718 sshd[4550]: Connection closed by 10.0.0.1 port 38972
Mar 20 21:21:14.853767 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:14.857154 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Mar 20 21:21:14.857426 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:38972.service: Deactivated successfully.
Mar 20 21:21:14.858257 containerd[1483]: time="2025-03-20T21:21:14.858208398Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:14.860713 containerd[1483]: time="2025-03-20T21:21:14.859156130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 20 21:21:14.860565 systemd[1]: session-12.scope: Deactivated successfully.
Mar 20 21:21:14.861430 containerd[1483]: time="2025-03-20T21:21:14.861385172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 435.768162ms"
Mar 20 21:21:14.861511 containerd[1483]: time="2025-03-20T21:21:14.861496581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 20 21:21:14.864567 systemd-logind[1460]: Removed session 12.
Mar 20 21:21:14.872423 containerd[1483]: time="2025-03-20T21:21:14.872363404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 20 21:21:14.874936 containerd[1483]: time="2025-03-20T21:21:14.874891247Z" level=info msg="CreateContainer within sandbox \"2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 20 21:21:14.884777 containerd[1483]: time="2025-03-20T21:21:14.883146499Z" level=info msg="Container e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:14.892424 containerd[1483]: time="2025-03-20T21:21:14.892379710Z" level=info msg="CreateContainer within sandbox \"2f11fb9cda7f28dd7ad862b4573e83ebc0dc82807da6964458dae8b76d7e38fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e\""
Mar 20 21:21:14.893872 containerd[1483]: time="2025-03-20T21:21:14.893566873Z" level=info msg="StartContainer for \"e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e\""
Mar 20 21:21:14.895029 containerd[1483]: time="2025-03-20T21:21:14.895001771Z" level=info msg="connecting to shim e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e" address="unix:///run/containerd/s/b892168160eb1a0b272813e7acdb76637b89f28a5f027d049c71309905b925f6" protocol=ttrpc version=3
Mar 20 21:21:14.925861 systemd[1]: Started cri-containerd-e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e.scope - libcontainer container e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e.
Mar 20 21:21:14.980148 containerd[1483]: time="2025-03-20T21:21:14.979899303Z" level=info msg="StartContainer for \"e8e1e4aa8c81e162eb745a6b4fec0f4a19194496545447cac56ec45e4ca6881e\" returns successfully"
Mar 20 21:21:15.609420 kubelet[2587]: E0320 21:21:15.609351 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:15.620765 kubelet[2587]: I0320 21:21:15.620694 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-665c6f8fcf-djwgd" podStartSLOduration=25.899267541 podStartE2EDuration="28.620676237s" podCreationTimestamp="2025-03-20 21:20:47 +0000 UTC" firstStartedPulling="2025-03-20 21:21:11.703891758 +0000 UTC m=+35.464198967" lastFinishedPulling="2025-03-20 21:21:14.425300454 +0000 UTC m=+38.185607663" observedRunningTime="2025-03-20 21:21:15.620004996 +0000 UTC m=+39.380312205" watchObservedRunningTime="2025-03-20 21:21:15.620676237 +0000 UTC m=+39.380983446"
Mar 20 21:21:15.648962 kubelet[2587]: I0320 21:21:15.648873 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-665c6f8fcf-hvdfk" podStartSLOduration=25.544550799 podStartE2EDuration="28.648854854s" podCreationTimestamp="2025-03-20 21:20:47 +0000 UTC" firstStartedPulling="2025-03-20 21:21:11.767910078 +0000 UTC m=+35.528217277" lastFinishedPulling="2025-03-20 21:21:14.872214123 +0000 UTC m=+38.632521332" observedRunningTime="2025-03-20 21:21:15.648241351 +0000 UTC m=+39.408548560" watchObservedRunningTime="2025-03-20 21:21:15.648854854 +0000 UTC m=+39.409162063"
Mar 20 21:21:16.595129 containerd[1483]: time="2025-03-20T21:21:16.595052226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:16.595869 containerd[1483]: time="2025-03-20T21:21:16.595797206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 20 21:21:16.597030 containerd[1483]: time="2025-03-20T21:21:16.596974019Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:16.599214 containerd[1483]: time="2025-03-20T21:21:16.599185065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:16.599708 containerd[1483]: time="2025-03-20T21:21:16.599685475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.727291755s"
Mar 20 21:21:16.599763 containerd[1483]: time="2025-03-20T21:21:16.599713598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 20 21:21:16.600642 containerd[1483]: time="2025-03-20T21:21:16.600615133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\""
Mar 20 21:21:16.602253 containerd[1483]: time="2025-03-20T21:21:16.601816572Z" level=info msg="CreateContainer within sandbox \"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 20 21:21:16.608708 kubelet[2587]: I0320 21:21:16.608677 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 21:21:16.609298 kubelet[2587]: E0320 21:21:16.609272 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:16.617280 containerd[1483]: time="2025-03-20T21:21:16.617242348Z" level=info msg="Container 09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:16.626454 containerd[1483]: time="2025-03-20T21:21:16.626415732Z" level=info msg="CreateContainer within sandbox \"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab\""
Mar 20 21:21:16.626992 containerd[1483]: time="2025-03-20T21:21:16.626968040Z" level=info msg="StartContainer for \"09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab\""
Mar 20 21:21:16.628346 containerd[1483]: time="2025-03-20T21:21:16.628313339Z" level=info msg="connecting to shim 09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab" address="unix:///run/containerd/s/7915da30cfad3eb2e9471c4c71688cfffcf399f517825dabfdba799bcd096c74" protocol=ttrpc version=3
Mar 20 21:21:16.653800 systemd[1]: Started cri-containerd-09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab.scope - libcontainer container 09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab.
Mar 20 21:21:16.700004 containerd[1483]: time="2025-03-20T21:21:16.699945402Z" level=info msg="StartContainer for \"09b6909ff03e3e69aaa64cda686dada3e94b1e140fa1cb97423a0b1cc7e4c5ab\" returns successfully"
Mar 20 21:21:18.972863 containerd[1483]: time="2025-03-20T21:21:18.972801426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:18.973523 containerd[1483]: time="2025-03-20T21:21:18.973473930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912"
Mar 20 21:21:18.974734 containerd[1483]: time="2025-03-20T21:21:18.974677241Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:18.976508 containerd[1483]: time="2025-03-20T21:21:18.976470892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:18.977090 containerd[1483]: time="2025-03-20T21:21:18.977053427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.376312478s"
Mar 20 21:21:18.977090 containerd[1483]: time="2025-03-20T21:21:18.977085888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\""
Mar 20 21:21:18.978077 containerd[1483]: time="2025-03-20T21:21:18.978047405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 20 21:21:18.986669 containerd[1483]: time="2025-03-20T21:21:18.986625125Z" level=info msg="CreateContainer within sandbox \"ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 20 21:21:18.995395 containerd[1483]: time="2025-03-20T21:21:18.995351506Z" level=info msg="Container 104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:19.004528 containerd[1483]: time="2025-03-20T21:21:19.004476974Z" level=info msg="CreateContainer within sandbox \"ee482b04b6f6b1571005f01b6475f1c263f125a0c209f56cc380ffbffeaff9df\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\""
Mar 20 21:21:19.004992 containerd[1483]: time="2025-03-20T21:21:19.004961655Z" level=info msg="StartContainer for \"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\""
Mar 20 21:21:19.006070 containerd[1483]: time="2025-03-20T21:21:19.006036936Z" level=info msg="connecting to shim 104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f" address="unix:///run/containerd/s/a98db47b71f5de1855810200a5fcf2ab1db8bbcaef710080bc7b2cbe62b22910" protocol=ttrpc version=3
Mar 20 21:21:19.033927 systemd[1]: Started cri-containerd-104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f.scope - libcontainer container 104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f.
Mar 20 21:21:19.083301 containerd[1483]: time="2025-03-20T21:21:19.083253135Z" level=info msg="StartContainer for \"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\" returns successfully"
Mar 20 21:21:19.631497 kubelet[2587]: I0320 21:21:19.631415 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cf6b68695-xfrlz" podStartSLOduration=25.386335501 podStartE2EDuration="31.631397308s" podCreationTimestamp="2025-03-20 21:20:48 +0000 UTC" firstStartedPulling="2025-03-20 21:21:12.73281613 +0000 UTC m=+36.493123339" lastFinishedPulling="2025-03-20 21:21:18.977877937 +0000 UTC m=+42.738185146" observedRunningTime="2025-03-20 21:21:19.630852224 +0000 UTC m=+43.391159433" watchObservedRunningTime="2025-03-20 21:21:19.631397308 +0000 UTC m=+43.391704517"
Mar 20 21:21:19.869998 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988).
Mar 20 21:21:19.957692 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:19.959812 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:19.965398 systemd-logind[1460]: New session 13 of user core.
Mar 20 21:21:19.975752 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 20 21:21:20.112101 sshd[4689]: Connection closed by 10.0.0.1 port 38988
Mar 20 21:21:20.112573 sshd-session[4687]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:20.117235 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:38988.service: Deactivated successfully.
Mar 20 21:21:20.119814 systemd[1]: session-13.scope: Deactivated successfully.
Mar 20 21:21:20.120586 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Mar 20 21:21:20.121558 systemd-logind[1460]: Removed session 13.
Mar 20 21:21:20.621119 kubelet[2587]: I0320 21:21:20.621069 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 21:21:21.383294 containerd[1483]: time="2025-03-20T21:21:21.383220045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:21.384049 containerd[1483]: time="2025-03-20T21:21:21.383974102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 20 21:21:21.385341 containerd[1483]: time="2025-03-20T21:21:21.385292529Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:21.387327 containerd[1483]: time="2025-03-20T21:21:21.387293548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:21:21.387838 containerd[1483]: time="2025-03-20T21:21:21.387806453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.409726156s"
Mar 20 21:21:21.387875 containerd[1483]: time="2025-03-20T21:21:21.387838543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 20 21:21:21.390091 containerd[1483]: time="2025-03-20T21:21:21.390042834Z" level=info msg="CreateContainer within sandbox \"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 20 21:21:21.398021 containerd[1483]: time="2025-03-20T21:21:21.397969136Z" level=info msg="Container be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:21:21.408793 containerd[1483]: time="2025-03-20T21:21:21.408740863Z" level=info msg="CreateContainer within sandbox \"28fd4a3303347eac8e5017fa4da451f8459b3adbe7d368dba4ea8a8633b41622\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306\""
Mar 20 21:21:21.409491 containerd[1483]: time="2025-03-20T21:21:21.409392608Z" level=info msg="StartContainer for \"be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306\""
Mar 20 21:21:21.411248 containerd[1483]: time="2025-03-20T21:21:21.411210143Z" level=info msg="connecting to shim be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306" address="unix:///run/containerd/s/7915da30cfad3eb2e9471c4c71688cfffcf399f517825dabfdba799bcd096c74" protocol=ttrpc version=3
Mar 20 21:21:21.442777 systemd[1]: Started cri-containerd-be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306.scope - libcontainer container be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306.
Mar 20 21:21:21.699687 containerd[1483]: time="2025-03-20T21:21:21.699636125Z" level=info msg="StartContainer for \"be815de60dfbd28a729ace23eaf3a4c77cc9f00e3a2276919260cc2a790cc306\" returns successfully"
Mar 20 21:21:22.436629 kubelet[2587]: I0320 21:21:22.436549 2587 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 20 21:21:22.436629 kubelet[2587]: I0320 21:21:22.436635 2587 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 20 21:21:25.129508 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:56424.service - OpenSSH per-connection server daemon (10.0.0.1:56424).
Mar 20 21:21:25.191503 sshd[4747]: Accepted publickey for core from 10.0.0.1 port 56424 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:25.193562 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:25.198435 systemd-logind[1460]: New session 14 of user core.
Mar 20 21:21:25.208759 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 20 21:21:25.334201 sshd[4749]: Connection closed by 10.0.0.1 port 56424
Mar 20 21:21:25.334904 sshd-session[4747]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:25.340299 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:56424.service: Deactivated successfully.
Mar 20 21:21:25.342827 systemd[1]: session-14.scope: Deactivated successfully.
Mar 20 21:21:25.343566 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Mar 20 21:21:25.344560 systemd-logind[1460]: Removed session 14.
Mar 20 21:21:25.571050 kubelet[2587]: I0320 21:21:25.570999 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 21:21:25.615731 containerd[1483]: time="2025-03-20T21:21:25.615683548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\" id:\"1696abac2ea4be47868ad8d3494cd7af5d826165eaab602e81ac9057a58d5736\" pid:4773 exited_at:{seconds:1742505685 nanos:615160175}"
Mar 20 21:21:25.629002 kubelet[2587]: I0320 21:21:25.628916 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rrs62" podStartSLOduration=28.074341302 podStartE2EDuration="37.628890293s" podCreationTimestamp="2025-03-20 21:20:48 +0000 UTC" firstStartedPulling="2025-03-20 21:21:11.834100145 +0000 UTC m=+35.594407354" lastFinishedPulling="2025-03-20 21:21:21.388649146 +0000 UTC m=+45.148956345" observedRunningTime="2025-03-20 21:21:22.716633147 +0000 UTC m=+46.476940466" watchObservedRunningTime="2025-03-20 21:21:25.628890293 +0000 UTC m=+49.389197512"
Mar 20 21:21:25.660000 containerd[1483]: time="2025-03-20T21:21:25.659944987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\" id:\"f0013310603df4a2c35f73a2f585e09d39b682b3be44ce868c34bd915d02bb83\" pid:4796 exited_at:{seconds:1742505685 nanos:659772895}"
Mar 20 21:21:29.130416 containerd[1483]: time="2025-03-20T21:21:29.130214518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\" id:\"8f73ebfdf4fcafb0fedade29457798f9cf8c9a604a58d5504ffcb76e6e916217\" pid:4824 exited_at:{seconds:1742505689 nanos:129758171}"
Mar 20 21:21:29.132753 kubelet[2587]: E0320 21:21:29.132723 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:21:30.347139 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:56434.service - OpenSSH per-connection server daemon (10.0.0.1:56434).
Mar 20 21:21:30.405238 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 56434 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:30.406997 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:30.412787 systemd-logind[1460]: New session 15 of user core.
Mar 20 21:21:30.422743 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 20 21:21:30.550256 sshd[4840]: Connection closed by 10.0.0.1 port 56434
Mar 20 21:21:30.550688 sshd-session[4838]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:30.555440 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:56434.service: Deactivated successfully.
Mar 20 21:21:30.557578 systemd[1]: session-15.scope: Deactivated successfully.
Mar 20 21:21:30.558386 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Mar 20 21:21:30.559479 systemd-logind[1460]: Removed session 15.
Mar 20 21:21:35.262058 containerd[1483]: time="2025-03-20T21:21:35.261937469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\" id:\"08340d290454a91c5665494c02525df8bfa6967e2c273021b4d23481f41f87bb\" pid:4868 exited_at:{seconds:1742505695 nanos:261718729}"
Mar 20 21:21:35.562784 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:34460.service - OpenSSH per-connection server daemon (10.0.0.1:34460).
Mar 20 21:21:35.620988 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 34460 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:35.622465 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:35.626567 systemd-logind[1460]: New session 16 of user core.
Mar 20 21:21:35.634725 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 20 21:21:35.748143 sshd[4882]: Connection closed by 10.0.0.1 port 34460
Mar 20 21:21:35.749147 sshd-session[4880]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:35.759705 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:34460.service: Deactivated successfully.
Mar 20 21:21:35.761743 systemd[1]: session-16.scope: Deactivated successfully.
Mar 20 21:21:35.762492 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Mar 20 21:21:35.765013 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:34468.service - OpenSSH per-connection server daemon (10.0.0.1:34468).
Mar 20 21:21:35.766045 systemd-logind[1460]: Removed session 16.
Mar 20 21:21:35.817167 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 34468 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:21:35.818962 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:21:35.823937 systemd-logind[1460]: New session 17 of user core.
Mar 20 21:21:35.834863 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 20 21:21:36.110926 sshd[4897]: Connection closed by 10.0.0.1 port 34468
Mar 20 21:21:36.111250 sshd-session[4894]: pam_unix(sshd:session): session closed for user core
Mar 20 21:21:36.126243 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:34468.service: Deactivated successfully.
Mar 20 21:21:36.129486 systemd[1]: session-17.scope: Deactivated successfully.
Mar 20 21:21:36.132108 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Mar 20 21:21:36.134120 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:34484.service - OpenSSH per-connection server daemon (10.0.0.1:34484).
Mar 20 21:21:36.135014 systemd-logind[1460]: Removed session 17.
Mar 20 21:21:36.190308 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 34484 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:36.191959 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:36.196946 systemd-logind[1460]: New session 18 of user core. Mar 20 21:21:36.206734 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:21:37.496124 kubelet[2587]: I0320 21:21:37.496030 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:21:37.980110 sshd[4910]: Connection closed by 10.0.0.1 port 34484 Mar 20 21:21:37.981942 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:37.993815 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:34484.service: Deactivated successfully. Mar 20 21:21:37.997290 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:21:37.998403 systemd[1]: session-18.scope: Consumed 654ms CPU time, 68.2M memory peak. Mar 20 21:21:38.001295 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:21:38.004245 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:34498.service - OpenSSH per-connection server daemon (10.0.0.1:34498). Mar 20 21:21:38.004902 systemd-logind[1460]: Removed session 18. Mar 20 21:21:38.077613 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 34498 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:38.079447 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:38.084105 systemd-logind[1460]: New session 19 of user core. Mar 20 21:21:38.092749 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 20 21:21:38.341856 sshd[4933]: Connection closed by 10.0.0.1 port 34498 Mar 20 21:21:38.344111 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:38.354239 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:34498.service: Deactivated successfully. Mar 20 21:21:38.357571 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:21:38.359529 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:21:38.361281 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). Mar 20 21:21:38.362630 systemd-logind[1460]: Removed session 19. Mar 20 21:21:38.414484 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:38.417554 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:38.425285 systemd-logind[1460]: New session 20 of user core. Mar 20 21:21:38.429847 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:21:38.554755 sshd[4948]: Connection closed by 10.0.0.1 port 34502 Mar 20 21:21:38.555111 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:38.559928 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:34502.service: Deactivated successfully. Mar 20 21:21:38.562186 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:21:38.562951 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:21:38.564254 systemd-logind[1460]: Removed session 20. Mar 20 21:21:43.569646 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:45854.service - OpenSSH per-connection server daemon (10.0.0.1:45854). 
Mar 20 21:21:43.627342 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 45854 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:43.629374 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:43.634429 systemd-logind[1460]: New session 21 of user core. Mar 20 21:21:43.640750 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:21:43.754857 sshd[4971]: Connection closed by 10.0.0.1 port 45854 Mar 20 21:21:43.755203 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:43.759947 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:45854.service: Deactivated successfully. Mar 20 21:21:43.762707 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:21:43.763433 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Mar 20 21:21:43.764381 systemd-logind[1460]: Removed session 21. Mar 20 21:21:48.768243 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:45856.service - OpenSSH per-connection server daemon (10.0.0.1:45856). Mar 20 21:21:48.824186 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 45856 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:48.825786 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:48.830677 systemd-logind[1460]: New session 22 of user core. Mar 20 21:21:48.837749 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 20 21:21:48.955109 sshd[4992]: Connection closed by 10.0.0.1 port 45856 Mar 20 21:21:48.955504 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:48.960176 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:45856.service: Deactivated successfully. Mar 20 21:21:48.962410 systemd[1]: session-22.scope: Deactivated successfully. Mar 20 21:21:48.963328 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. 
Mar 20 21:21:48.964535 systemd-logind[1460]: Removed session 22. Mar 20 21:21:53.970922 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:58598.service - OpenSSH per-connection server daemon (10.0.0.1:58598). Mar 20 21:21:54.024395 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:54.026353 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:54.031422 systemd-logind[1460]: New session 23 of user core. Mar 20 21:21:54.038753 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 20 21:21:54.151157 sshd[5009]: Connection closed by 10.0.0.1 port 58598 Mar 20 21:21:54.151493 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:54.155666 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:58598.service: Deactivated successfully. Mar 20 21:21:54.158238 systemd[1]: session-23.scope: Deactivated successfully. Mar 20 21:21:54.159087 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Mar 20 21:21:54.160666 systemd-logind[1460]: Removed session 23. 
Mar 20 21:21:55.611087 containerd[1483]: time="2025-03-20T21:21:55.611019047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"104f2fb6c88cb142e71b8b34f5c0d82ba63f53f2950bde04f4b79deed870b23f\" id:\"94583d54dbd793cf1b9c0c47a8c519c25252ec4d06852f079bbf3f24aeb35107\" pid:5034 exited_at:{seconds:1742505715 nanos:610749702}" Mar 20 21:21:57.369692 kubelet[2587]: E0320 21:21:57.369591 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:21:59.125567 containerd[1483]: time="2025-03-20T21:21:59.125519903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f8ac0973caa34744c3955cb61a03ff291d7ec7470b639d6e43ea41489eb4808\" id:\"a4f4a4af34f05972b6f372334e7a7206585ac1a48245534affa115eb7151471d\" pid:5056 exited_at:{seconds:1742505719 nanos:125147894}" Mar 20 21:21:59.164452 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Mar 20 21:21:59.212253 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:21:59.213778 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:21:59.218109 systemd-logind[1460]: New session 24 of user core. Mar 20 21:21:59.224831 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 21:21:59.332728 sshd[5071]: Connection closed by 10.0.0.1 port 58608 Mar 20 21:21:59.333116 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Mar 20 21:21:59.337387 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:58608.service: Deactivated successfully. Mar 20 21:21:59.339961 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 21:21:59.340697 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. 
Mar 20 21:21:59.341706 systemd-logind[1460]: Removed session 24.