Jan 30 13:40:15.883154 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:40:15.883174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:40:15.883194 kernel: BIOS-provided physical RAM map:
Jan 30 13:40:15.883201 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:40:15.883207 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:40:15.883213 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:40:15.883220 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:40:15.883227 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:40:15.883233 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:40:15.883240 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:40:15.883248 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:40:15.883254 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:40:15.883261 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:40:15.883267 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:40:15.883275 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:40:15.883282 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:40:15.883291 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:40:15.883297 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:40:15.883304 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:40:15.883311 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:40:15.883317 kernel: NX (Execute Disable) protection: active
Jan 30 13:40:15.883324 kernel: APIC: Static calls initialized
Jan 30 13:40:15.883331 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:40:15.883337 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:40:15.883344 kernel: SMBIOS 2.8 present.
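The BIOS-e820 lines above are the firmware's map of physical address ranges and their types. As a quick way to tally them from a captured log, here is a minimal Python sketch; the regex matches the "BIOS-e820: [mem 0x...-0x...] type" format shown above, and the file name boot.log is an assumed capture of this journal:

    import re

    # Matches lines like:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+?)\s*$")

    totals = {}
    with open("boot.log") as f:            # assumed log capture, not a real path
        for line in f:
            m = E820_RE.search(line)
            if m:
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                kind = m.group(3)
                totals[kind] = totals.get(kind, 0) + (end - start + 1)

    for kind, size in sorted(totals.items()):
        print(f"{kind:10s} {size / 1024 / 1024:9.2f} MiB")

Summing the "usable" ranges this way gives roughly the 2.5 GB that the kernel later reports as total memory.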
Jan 30 13:40:15.883351 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:40:15.883357 kernel: Hypervisor detected: KVM
Jan 30 13:40:15.883366 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:40:15.883373 kernel: kvm-clock: using sched offset of 3969570790 cycles
Jan 30 13:40:15.883380 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:40:15.883387 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:40:15.883394 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:40:15.883401 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:40:15.883408 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:40:15.883415 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:40:15.883422 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:40:15.883431 kernel: Using GB pages for direct mapping
Jan 30 13:40:15.883438 kernel: Secure boot disabled
Jan 30 13:40:15.883445 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:40:15.883452 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:40:15.883462 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:40:15.883469 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883476 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883486 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:40:15.883493 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883500 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883507 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883514 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:15.883521 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:40:15.883529 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:40:15.883538 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:40:15.883545 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:40:15.883552 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:40:15.883559 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:40:15.883566 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:40:15.883573 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:40:15.883580 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:40:15.883599 kernel: No NUMA configuration found
Jan 30 13:40:15.883606 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:40:15.883616 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:40:15.883623 kernel: Zone ranges:
Jan 30 13:40:15.883630 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:40:15.883637 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:40:15.883644 kernel: Normal empty
Jan 30 13:40:15.883651 kernel: Movable zone start for each node
Jan 30 13:40:15.883658 kernel: Early memory node ranges
Jan 30 13:40:15.883665 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:40:15.883672 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:40:15.883680 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:40:15.883689 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:40:15.883696 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:40:15.883703 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:40:15.883710 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:40:15.883717 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:40:15.883724 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:40:15.883731 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:40:15.883738 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:40:15.883746 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:40:15.883755 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:40:15.883763 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:40:15.883770 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:40:15.883777 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:40:15.883784 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:40:15.883791 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:40:15.883798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:40:15.883805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:40:15.883812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:40:15.883822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:40:15.883829 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:40:15.883836 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:40:15.883843 kernel: TSC deadline timer available
Jan 30 13:40:15.883850 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:40:15.883857 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:40:15.883864 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:40:15.883871 kernel: kvm-guest: setup PV sched yield
Jan 30 13:40:15.883879 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:40:15.883886 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:40:15.883895 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:40:15.883903 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:40:15.883910 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:40:15.883917 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:40:15.883924 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:40:15.883931 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:40:15.883938 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:40:15.883946 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:40:15.883956 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:40:15.883963 kernel: random: crng init done
Jan 30 13:40:15.883970 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:40:15.883978 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:40:15.883985 kernel: Fallback order for Node 0: 0
Jan 30 13:40:15.883993 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:40:15.884000 kernel: Policy zone: DMA32
Jan 30 13:40:15.884007 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:40:15.884014 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 30 13:40:15.884024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:40:15.884031 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:40:15.884038 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:40:15.884045 kernel: Dynamic Preempt: voluntary
Jan 30 13:40:15.884060 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:40:15.884075 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:40:15.884083 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:40:15.884091 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:40:15.884098 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:40:15.884106 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:40:15.884113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:40:15.884121 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:40:15.884130 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:40:15.884138 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:40:15.884145 kernel: Console: colour dummy device 80x25
Jan 30 13:40:15.884153 kernel: printk: console [ttyS0] enabled
Jan 30 13:40:15.884160 kernel: ACPI: Core revision 20230628
Jan 30 13:40:15.884170 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:40:15.884186 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:40:15.884194 kernel: x2apic enabled
Jan 30 13:40:15.884201 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:40:15.884209 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:40:15.884216 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:40:15.884224 kernel: kvm-guest: setup PV IPIs
Jan 30 13:40:15.884231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:40:15.884239 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:40:15.884248 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:40:15.884256 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:40:15.884263 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:40:15.884271 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:40:15.884278 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:40:15.884286 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:40:15.884293 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:40:15.884301 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:40:15.884308 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:40:15.884318 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:40:15.884325 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:40:15.884334 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:40:15.884343 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:40:15.884352 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:40:15.884361 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:40:15.884369 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:40:15.884376 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:40:15.884386 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:40:15.884393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:40:15.884401 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:40:15.884409 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:40:15.884416 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:40:15.884423 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:40:15.884431 kernel: landlock: Up and running.
Jan 30 13:40:15.884438 kernel: SELinux: Initializing.
Jan 30 13:40:15.884446 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:40:15.884455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:40:15.884463 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:40:15.884471 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:40:15.884478 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:40:15.884486 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:40:15.884494 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:40:15.884501 kernel: ... version: 0
Jan 30 13:40:15.884509 kernel: ... bit width: 48
Jan 30 13:40:15.884516 kernel: ... generic registers: 6
Jan 30 13:40:15.884526 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:40:15.884533 kernel: ... max period: 00007fffffffffff
Jan 30 13:40:15.884541 kernel: ... fixed-purpose events: 0
Jan 30 13:40:15.884548 kernel: ... event mask: 000000000000003f
Jan 30 13:40:15.884556 kernel: signal: max sigframe size: 1776
Jan 30 13:40:15.884563 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:40:15.884571 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:40:15.884578 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:40:15.884586 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:40:15.884606 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:40:15.884614 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:40:15.884621 kernel: smpboot: Max logical packages: 1
Jan 30 13:40:15.884629 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:40:15.884636 kernel: devtmpfs: initialized
Jan 30 13:40:15.884643 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:40:15.884651 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:40:15.884659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:40:15.884666 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:40:15.884676 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:40:15.884684 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:40:15.884691 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:40:15.884699 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:40:15.884706 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:40:15.884714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:40:15.884721 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:40:15.884729 kernel: audit: type=2000 audit(1738244415.791:1): state=initialized audit_enabled=0 res=1
Jan 30 13:40:15.884736 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:40:15.884746 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:40:15.884753 kernel: cpuidle: using governor menu
Jan 30 13:40:15.884761 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:40:15.884768 kernel: dca service started, version 1.12.1
Jan 30 13:40:15.884776 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:40:15.884783 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:40:15.884791 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:40:15.884798 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
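The per-CPU BogoMIPS preset ("5589.50 BogoMIPS (lpj=2794750)") and the 4-CPU total above are consistent with the kernel's formula bogomips = lpj * HZ / 500000. HZ=1000 is an assumption here, but it reproduces both printed values exactly:

    # BogoMIPS as the kernel computes it: lpj * HZ / 500000.
    # HZ=1000 is assumed; it matches the numbers in this log.
    lpj, hz, cpus = 2794750, 1000, 4
    per_cpu = lpj * hz / 500000
    print(f"{per_cpu:.2f}")          # 5589.50  (the preset value above)
    print(f"{cpus * per_cpu:.2f}")   # 22358.00 (the 4-CPU total above)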
Jan 30 13:40:15.884806 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:40:15.884816 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:40:15.884824 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:40:15.884831 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:40:15.884839 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:40:15.884846 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:40:15.884853 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:40:15.884861 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:40:15.884868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:40:15.884876 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:40:15.884885 kernel: ACPI: Interpreter enabled
Jan 30 13:40:15.884893 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:40:15.884900 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:40:15.884908 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:40:15.884915 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:40:15.884923 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:40:15.884930 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:40:15.885110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:40:15.885252 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:40:15.885375 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:40:15.885385 kernel: PCI host bridge to bus 0000:00
Jan 30 13:40:15.885509 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:40:15.885648 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:40:15.885836 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:40:15.885947 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:40:15.886061 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:40:15.886173 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:40:15.886296 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:40:15.886434 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:40:15.886571 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:40:15.886710 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:40:15.886835 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:40:15.886954 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:40:15.887159 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:40:15.887290 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:40:15.887483 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:40:15.887625 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:40:15.887747 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:40:15.887873 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:40:15.888001 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:40:15.888121 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:40:15.888251 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:40:15.888371 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:40:15.888498 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:40:15.888675 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:40:15.888832 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:40:15.888953 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:40:15.889071 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:40:15.889208 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:40:15.889329 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:40:15.889455 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:40:15.889660 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:40:15.889789 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:40:15.889917 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:40:15.890037 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:40:15.890047 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:40:15.890055 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:40:15.890063 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:40:15.890070 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:40:15.890082 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:40:15.890090 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:40:15.890098 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:40:15.890105 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:40:15.890113 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:40:15.890120 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:40:15.890128 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:40:15.890135 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:40:15.890143 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:40:15.890152 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:40:15.890160 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:40:15.890167 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:40:15.890175 kernel: iommu: Default domain type: Translated
Jan 30 13:40:15.890192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:40:15.890199 kernel: efivars: Registered efivars operations
Jan 30 13:40:15.890207 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:40:15.890214 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:40:15.890222 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:40:15.890232 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:40:15.890240 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:40:15.890247 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:40:15.890369 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:40:15.890487 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:40:15.890628 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:40:15.890638 kernel: vgaarb: loaded
Jan 30 13:40:15.890646 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:40:15.890654 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:40:15.890666 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:40:15.890674 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:40:15.890682 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:40:15.890689 kernel: pnp: PnP ACPI init
Jan 30 13:40:15.890825 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:40:15.890835 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:40:15.890843 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:40:15.890851 kernel: NET: Registered PF_INET protocol family
Jan 30 13:40:15.890862 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:40:15.890870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:40:15.890877 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:40:15.890885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:40:15.890893 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:40:15.890901 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:40:15.890908 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:40:15.890916 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:40:15.890923 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:40:15.890933 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:40:15.891053 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:40:15.891174 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:40:15.891296 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:40:15.891406 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:40:15.891514 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:40:15.891636 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:40:15.891746 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:40:15.891860 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:40:15.891870 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:40:15.891878 kernel: Initialise system trusted keyrings
Jan 30 13:40:15.891886 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:40:15.891893 kernel: Key type asymmetric registered
Jan 30 13:40:15.891901 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:40:15.891908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:40:15.891916 kernel: io scheduler mq-deadline registered
Jan 30 13:40:15.891926 kernel: io scheduler kyber registered
Jan 30 13:40:15.891934 kernel: io scheduler bfq registered
Jan 30 13:40:15.891941 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:40:15.891950 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:40:15.891957 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:40:15.891965 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:40:15.891972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
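The "(order: N, M bytes)" pairs in the hash-table lines above all satisfy M = PAGE_SIZE << N, i.e. each table occupies 2**N contiguous 4 KiB pages. A small Python sanity check against three of the tables logged above:

    # Each "(order: N, M bytes)" pair satisfies M == PAGE_SIZE << N,
    # with PAGE_SIZE = 4096 on x86-64.
    PAGE_SIZE = 4096
    tables = [
        ("TCP established", 6, 262144),
        ("TCP bind",        8, 1048576),
        ("UDP",             4, 65536),
    ]
    for name, order, nbytes in tables:
        assert PAGE_SIZE << order == nbytes
        print(f"{name}: 4096 << {order} = {nbytes} bytes")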
Jan 30 13:40:15.891980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:40:15.891988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:40:15.891995 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:40:15.892005 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:40:15.892139 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:40:15.892151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:40:15.892278 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:40:15.892401 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:40:15 UTC (1738244415)
Jan 30 13:40:15.892516 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:40:15.892526 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:40:15.892537 kernel: efifb: probing for efifb
Jan 30 13:40:15.892545 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:40:15.892553 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:40:15.892560 kernel: efifb: scrolling: redraw
Jan 30 13:40:15.892568 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:40:15.892576 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:40:15.892613 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:40:15.892623 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:40:15.892631 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:40:15.892641 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:40:15.892649 kernel: Segment Routing with IPv6
Jan 30 13:40:15.892657 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:40:15.892664 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:40:15.892672 kernel: Key type dns_resolver registered
Jan 30 13:40:15.892680 kernel: IPI shorthand broadcast: enabled
Jan 30 13:40:15.892688 kernel: sched_clock: Marking stable (557003407, 113879885)->(718532609, -47649317)
Jan 30 13:40:15.892696 kernel: registered taskstats version 1
Jan 30 13:40:15.892703 kernel: Loading compiled-in X.509 certificates
Jan 30 13:40:15.892711 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:40:15.892722 kernel: Key type .fscrypt registered
Jan 30 13:40:15.892730 kernel: Key type fscrypt-provisioning registered
Jan 30 13:40:15.892738 kernel: ima: No TPM chip found, activating TPM-bypass!
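The rtc_cmos line gives both the wall-clock time and the raw epoch second, and the same epoch (1738244415) appears in the audit record earlier in this log (audit(1738244415.791:1)). A one-liner confirms the conversion:

    from datetime import datetime, timezone

    # 1738244415 is the epoch second printed by rtc_cmos and used by the
    # earlier audit record.
    print(datetime.fromtimestamp(1738244415, tz=timezone.utc).isoformat())
    # -> 2025-01-30T13:40:15+00:00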
Jan 30 13:40:15.892746 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:40:15.892753 kernel: ima: No architecture policies found
Jan 30 13:40:15.892761 kernel: clk: Disabling unused clocks
Jan 30 13:40:15.892769 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:40:15.892777 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:40:15.892787 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:40:15.892795 kernel: Run /init as init process
Jan 30 13:40:15.892803 kernel: with arguments:
Jan 30 13:40:15.892810 kernel: /init
Jan 30 13:40:15.892818 kernel: with environment:
Jan 30 13:40:15.892828 kernel: HOME=/
Jan 30 13:40:15.892835 kernel: TERM=linux
Jan 30 13:40:15.892843 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:40:15.892853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:40:15.892866 systemd[1]: Detected virtualization kvm.
Jan 30 13:40:15.892874 systemd[1]: Detected architecture x86-64.
Jan 30 13:40:15.892883 systemd[1]: Running in initrd.
Jan 30 13:40:15.892893 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:40:15.892903 systemd[1]: Hostname set to .
Jan 30 13:40:15.892911 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:40:15.892920 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:40:15.892928 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:15.892937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:15.892946 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:40:15.892954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:40:15.892963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:40:15.892974 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:40:15.892984 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:40:15.892992 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:40:15.893001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:40:15.893009 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:15.893018 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:40:15.893026 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:40:15.893037 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:40:15.893045 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:40:15.893054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:40:15.893062 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:40:15.893071 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:40:15.893079 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:40:15.893088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:15.893096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:15.893107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:15.893116 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:40:15.893124 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:40:15.893132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:40:15.893141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:40:15.893149 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:40:15.893158 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:40:15.893166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:40:15.893175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:15.893194 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:40:15.893202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:15.893211 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:40:15.893237 systemd-journald[191]: Collecting audit messages is disabled.
Jan 30 13:40:15.893257 systemd-journald[191]: Journal started
Jan 30 13:40:15.893275 systemd-journald[191]: Runtime Journal (/run/log/journal/4999ef40674f40078004c2e98e91a209) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:40:15.897145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:40:15.887503 systemd-modules-load[194]: Inserted module 'overlay'
Jan 30 13:40:15.898688 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:40:15.900457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:15.904881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:40:15.907991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:40:15.909387 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:40:15.912045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:40:15.924536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:15.924820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:40:15.931459 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:15.935087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:40:15.935956 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:40:15.939429 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 30 13:40:15.940390 kernel: Bridge firewalling registered
Jan 30 13:40:15.941911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:15.944376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:40:15.953216 dracut-cmdline[222]: dracut-dracut-053
Jan 30 13:40:15.956809 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:40:15.962027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:15.969715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:16.003068 systemd-resolved[248]: Positive Trust Anchors:
Jan 30 13:40:16.003089 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:40:16.003131 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:40:16.006023 systemd-resolved[248]: Defaulting to hostname 'linux'.
Jan 30 13:40:16.007166 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:16.012935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:16.044630 kernel: SCSI subsystem initialized
Jan 30 13:40:16.054620 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:40:16.064622 kernel: iscsi: registered transport (tcp)
Jan 30 13:40:16.084820 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:40:16.084850 kernel: QLogic iSCSI HBA Driver
Jan 30 13:40:16.131243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:40:16.142712 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:40:16.166705 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:40:16.166758 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:40:16.167792 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:40:16.209624 kernel: raid6: avx2x4 gen() 30342 MB/s
Jan 30 13:40:16.226612 kernel: raid6: avx2x2 gen() 31221 MB/s
Jan 30 13:40:16.243696 kernel: raid6: avx2x1 gen() 25823 MB/s
Jan 30 13:40:16.243711 kernel: raid6: using algorithm avx2x2 gen() 31221 MB/s
Jan 30 13:40:16.261726 kernel: raid6: .... xor() 19812 MB/s, rmw enabled
Jan 30 13:40:16.261743 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:40:16.282617 kernel: xor: automatically using best checksumming function avx
Jan 30 13:40:16.437641 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:40:16.450996 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:40:16.466768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:16.478367 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 30 13:40:16.483013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
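dracut echoes the effective kernel command line above; note that rootflags=rw and mount.usrflags=ro appear twice, once prepended by the bootloader and once in the baked-in arguments. A minimal sketch of the usual split-on-whitespace, split-on-"=" treatment, where a repeated key simply keeps its last value (this is an illustration, not dracut's actual parser; the string is a subset of the logged line):

    # Parse a kernel command line into bare flags and key=value parameters.
    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
               "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=ttyS0,115200 flatcar.first_boot=detected")

    params, flags = {}, []
    for tok in cmdline.split():
        if "=" in tok:
            key, val = tok.split("=", 1)
            params[key] = val        # a repeated key keeps its last value
        else:
            flags.append(tok)

    print(params["root"])            # LABEL=ROOT
    print(params["console"])         # ttyS0,115200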
Jan 30 13:40:16.493739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:40:16.506355 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 30 13:40:16.536467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:40:16.545733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:40:16.612229 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:16.620794 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:40:16.632655 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:40:16.635888 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:40:16.638457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:16.643657 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:40:16.661311 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:40:16.661632 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:40:16.661654 kernel: GPT:9289727 != 19775487
Jan 30 13:40:16.661675 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:40:16.661718 kernel: GPT:9289727 != 19775487
Jan 30 13:40:16.661737 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:40:16.661758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:16.661778 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:40:16.639737 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:40:16.655148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:40:16.666443 kernel: libata version 3.00 loaded.
Jan 30 13:40:16.666384 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:40:16.674782 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:40:16.709530 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:40:16.709549 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:40:16.709771 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:40:16.709933 kernel: scsi host0: ahci
Jan 30 13:40:16.710097 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:40:16.710109 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:40:16.710119 kernel: scsi host1: ahci
Jan 30 13:40:16.710299 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (458)
Jan 30 13:40:16.710321 kernel: scsi host2: ahci
Jan 30 13:40:16.710479 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jan 30 13:40:16.710495 kernel: scsi host3: ahci
Jan 30 13:40:16.710664 kernel: scsi host4: ahci
Jan 30 13:40:16.710818 kernel: scsi host5: ahci
Jan 30 13:40:16.710977 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 30 13:40:16.710988 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 30 13:40:16.710998 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 30 13:40:16.711009 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 30 13:40:16.711023 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 30 13:40:16.711033 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 30 13:40:16.681922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:40:16.682036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:16.684707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:40:16.685939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:16.686291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:16.689288 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:16.698027 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:16.718988 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:40:16.735305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:40:16.739333 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:40:16.739407 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:40:16.746250 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:40:16.754843 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:40:16.756022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:16.756090 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:16.758610 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:16.764281 disk-uuid[552]: Primary Header is updated.
Jan 30 13:40:16.764281 disk-uuid[552]: Secondary Entries is updated.
Jan 30 13:40:16.764281 disk-uuid[552]: Secondary Header is updated.
Jan 30 13:40:16.767644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:16.761587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:16.771611 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:16.780355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
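The GPT warning earlier ("GPT:9289727 != 19775487") means the backup-header LBA recorded in the primary GPT header is not the disk's actual last LBA (19775488 sectors, so last LBA 19775487), which is typical when a smaller image has been written to a larger disk; the disk-uuid[552] messages above show it being repaired. A hypothetical Python sketch of the same check, with field offsets per the UEFI GPT header layout (reading /dev/vda requires root; treat this as an illustration):

    import struct

    DEV, SECTOR = "/dev/vda", 512    # device and logical block size from this log

    with open(DEV, "rb") as f:
        f.seek(1 * SECTOR)           # primary GPT header lives at LBA 1
        hdr = f.read(92)
        f.seek(0, 2)                 # seek to the end to learn the disk size
        last_lba = f.tell() // SECTOR - 1

    sig = hdr[0:8]                                   # must be b"EFI PART"
    backup_lba, = struct.unpack_from("<Q", hdr, 32)  # AlternateLBA field
    assert sig == b"EFI PART"
    if backup_lba != last_lba:
        print(f"GPT:{backup_lba} != {last_lba}")     # e.g. GPT:9289727 != 19775487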
Jan 30 13:40:16.792749 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:40:16.814984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:17.019630 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:40:17.019707 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:40:17.020639 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:40:17.020718 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:40:17.021622 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:40:17.022854 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:40:17.022870 kernel: ata3.00: applying bridge limits
Jan 30 13:40:17.023613 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:40:17.024616 kernel: ata3.00: configured for UDMA/100
Jan 30 13:40:17.026621 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:40:17.082214 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:40:17.094191 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:40:17.094205 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:40:17.773438 disk-uuid[554]: The operation has completed successfully.
Jan 30 13:40:17.774826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:17.802827 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:40:17.802960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:40:17.821703 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:40:17.827295 sh[595]: Success
Jan 30 13:40:17.839609 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:40:17.870680 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:40:17.882913 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:40:17.887469 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:40:17.899603 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:40:17.899630 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:17.899641 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:40:17.899652 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:40:17.900943 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:40:17.904831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:40:17.906335 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:40:17.915812 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:40:17.918485 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:40:17.925760 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:40:17.925787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:17.925798 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:17.928620 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:17.937699 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:40:17.939324 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:40:17.949613 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:40:17.955717 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:40:18.006826 ignition[683]: Ignition 2.19.0
Jan 30 13:40:18.007196 ignition[683]: Stage: fetch-offline
Jan 30 13:40:18.007234 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.007244 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:18.007334 ignition[683]: parsed url from cmdline: ""
Jan 30 13:40:18.007338 ignition[683]: no config URL provided
Jan 30 13:40:18.007343 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:40:18.007352 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:40:18.007377 ignition[683]: op(1): [started] loading QEMU firmware config module
Jan 30 13:40:18.007383 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:40:18.014442 ignition[683]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:40:18.034323 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:40:18.051763 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:40:18.064160 ignition[683]: parsing config with SHA512: 71be51a1ef1b80724c4a04e86bbe8c61f2d3207dc6223d3a356df8ee4cc5cbab9983a25941868e50868e21df5ddc5404e0e66fa5d8cb64faabacc06ecb50df93
Jan 30 13:40:18.067938 unknown[683]: fetched base config from "system"
Jan 30 13:40:18.068459 ignition[683]: fetch-offline: fetch-offline passed
Jan 30 13:40:18.067956 unknown[683]: fetched user config from "qemu"
Jan 30 13:40:18.068544 ignition[683]: Ignition finished successfully
Jan 30 13:40:18.073680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:40:18.074727 systemd-networkd[783]: lo: Link UP
Jan 30 13:40:18.074731 systemd-networkd[783]: lo: Gained carrier
Jan 30 13:40:18.076243 systemd-networkd[783]: Enumeration completed
Jan 30 13:40:18.076311 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:40:18.076636 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:18.076640 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:40:18.077801 systemd-networkd[783]: eth0: Link UP
Jan 30 13:40:18.077805 systemd-networkd[783]: eth0: Gained carrier
Jan 30 13:40:18.077811 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:18.078506 systemd[1]: Reached target network.target - Network.
Jan 30 13:40:18.080009 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:40:18.089727 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
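Ignition logs the SHA512 digest of the config it ends up parsing ("parsing config with SHA512: 71be51..."). Assuming the digest is taken over the raw bytes of that config, the same kind of digest for a local file can be computed like this (user.ign is a stand-in path for illustration; it will only match Ignition's value if the bytes match what Ignition parsed):

    import hashlib

    # Assumption: SHA512 over the raw config bytes, mirroring the digest
    # Ignition prints in the log above.
    with open("user.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())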
Jan 30 13:40:18.095658 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:40:18.102483 ignition[786]: Ignition 2.19.0
Jan 30 13:40:18.102493 ignition[786]: Stage: kargs
Jan 30 13:40:18.102666 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.102677 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:18.106433 ignition[786]: kargs: kargs passed
Jan 30 13:40:18.106481 ignition[786]: Ignition finished successfully
Jan 30 13:40:18.110742 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:40:18.128724 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:40:18.141147 ignition[796]: Ignition 2.19.0
Jan 30 13:40:18.141157 ignition[796]: Stage: disks
Jan 30 13:40:18.141308 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.141319 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:18.144998 ignition[796]: disks: disks passed
Jan 30 13:40:18.145044 ignition[796]: Ignition finished successfully
Jan 30 13:40:18.148192 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:40:18.148457 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:40:18.151260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:40:18.152498 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:40:18.153529 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:40:18.155566 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:40:18.172770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:40:18.182209 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.64
Jan 30 13:40:18.182224 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jan 30 13:40:18.184933 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:40:18.191015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:40:18.197680 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:40:18.282609 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:40:18.282909 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:40:18.284324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:40:18.297659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:40:18.299341 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:40:18.300579 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:40:18.305915 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 30 13:40:18.305931 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:40:18.300629 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:40:18.312456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:18.312472 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:18.312482 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:18.300649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:40:18.307213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:40:18.313623 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:40:18.316472 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:40:18.352099 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:40:18.356826 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:40:18.361510 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:40:18.366218 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:40:18.451348 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:40:18.461757 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:40:18.463440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:40:18.469611 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:40:18.488529 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:40:18.491420 ignition[928]: INFO : Ignition 2.19.0
Jan 30 13:40:18.491420 ignition[928]: INFO : Stage: mount
Jan 30 13:40:18.493140 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.493140 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:18.493140 ignition[928]: INFO : mount: mount passed
Jan 30 13:40:18.493140 ignition[928]: INFO : Ignition finished successfully
Jan 30 13:40:18.495542 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:40:18.503673 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:40:18.898062 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:40:18.907774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:40:18.914612 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Jan 30 13:40:18.916686 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:40:18.916702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:18.916713 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:18.919617 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:18.921073 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:40:18.953354 ignition[959]: INFO : Ignition 2.19.0
Jan 30 13:40:18.953354 ignition[959]: INFO : Stage: files
Jan 30 13:40:18.954992 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.954992 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:18.957765 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:40:18.959536 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:40:18.959536 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:40:18.963023 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:40:18.964828 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:40:18.964828 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:40:18.963538 unknown[959]: wrote ssh authorized keys file for user: core
Jan 30 13:40:18.968701 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 13:40:18.968701 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 30 13:40:19.001861 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:40:19.093035 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 13:40:19.094974 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:40:19.094974 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:40:19.094974 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:40:19.094974 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:40:19.094974 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:40:19.103818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 13:40:19.316728 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 30 13:40:19.614891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:40:19.958251 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:40:19.958251 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:40:19.962054 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:40:19.964462 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:40:19.964462 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:40:19.964462 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:40:19.969699 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:40:19.969699 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:40:19.969699 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:40:19.969699 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:40:19.992361 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:40:19.998770 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:40:20.000356 ignition[959]: INFO : files: files passed
Jan 30 13:40:20.000356 ignition[959]: INFO : Ignition finished successfully
Jan 30 13:40:20.009100 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:40:20.020722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:40:20.022478 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:40:20.024457 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:40:20.024575 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:40:20.032453 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:40:20.035485 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:20.035485 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:20.038678 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:20.041926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:40:20.042172 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:40:20.056710 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:40:20.079164 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:40:20.079287 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:40:20.080394 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:40:20.082582 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:40:20.083116 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:40:20.083824 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:40:20.103139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:40:20.112765 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:40:20.123741 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:20.123880 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:20.127319 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:40:20.128454 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:40:20.128560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:40:20.132061 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:40:20.133264 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:40:20.133602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:40:20.134104 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:40:20.134439 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:40:20.134944 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:40:20.135279 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:40:20.135634 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:40:20.136123 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:40:20.136454 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:40:20.136928 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:40:20.137030 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:40:20.137640 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:20.138143 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:40:20.138446 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:40:20.138586 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:20.160558 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:40:20.160703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:40:20.164705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:40:20.164826 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:40:20.165921 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:40:20.166174 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:40:20.171669 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:20.171820 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:40:20.174375 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:40:20.174871 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:40:20.174954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:40:20.177703 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:40:20.177783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:40:20.179390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:40:20.179492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:40:20.181223 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:40:20.181319 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:40:20.190719 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:40:20.191649 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:40:20.191760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:20.194552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:40:20.195626 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:40:20.195738 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:20.198101 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:40:20.198259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:40:20.205892 ignition[1014]: INFO : Ignition 2.19.0
Jan 30 13:40:20.205892 ignition[1014]: INFO : Stage: umount
Jan 30 13:40:20.203401 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:40:20.208565 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:20.208565 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:40:20.208565 ignition[1014]: INFO : umount: umount passed
Jan 30 13:40:20.208565 ignition[1014]: INFO : Ignition finished successfully
Jan 30 13:40:20.203504 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:40:20.208953 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:40:20.209061 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:40:20.210762 systemd[1]: Stopped target network.target - Network.
Jan 30 13:40:20.212677 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:40:20.212726 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:40:20.214511 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:40:20.214556 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:40:20.216389 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:40:20.216434 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:40:20.218633 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:40:20.218678 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:40:20.220747 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:40:20.222841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:20.225613 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:40:20.229631 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jan 30 13:40:20.231733 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:40:20.231872 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:40:20.234074 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:40:20.234201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:20.237610 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:40:20.237680 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:20.243682 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:40:20.245172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:40:20.245223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:40:20.247491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:40:20.247538 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:20.249745 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:40:20.249792 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:20.251878 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:40:20.251925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:40:20.254257 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:20.266041 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:40:20.266199 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:40:20.280394 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:40:20.280580 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:40:20.282833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:40:20.282886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:20.284930 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:40:20.284972 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:20.286955 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:40:20.287002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:40:20.289139 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:40:20.289186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:40:20.291125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:40:20.291169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:20.304720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:40:20.304771 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:40:20.304825 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:20.305158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:20.305200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:20.311261 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:40:20.311364 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:40:20.380981 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:40:20.381108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:40:20.383022 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:40:20.384759 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:40:20.384811 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:40:20.394736 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:40:20.402355 systemd[1]: Switching root.
Jan 30 13:40:20.434047 systemd-journald[191]: Journal stopped
Jan 30 13:40:21.512566 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:40:21.512649 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:40:21.512669 kernel: SELinux: policy capability open_perms=1
Jan 30 13:40:21.512684 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:40:21.512695 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:40:21.512706 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:40:21.512717 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:40:21.512728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:40:21.512739 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:40:21.512750 kernel: audit: type=1403 audit(1738244420.798:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:40:21.512767 systemd[1]: Successfully loaded SELinux policy in 39.496ms.
Jan 30 13:40:21.512788 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.799ms.
Jan 30 13:40:21.512808 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:40:21.512820 systemd[1]: Detected virtualization kvm.
Jan 30 13:40:21.512832 systemd[1]: Detected architecture x86-64.
Jan 30 13:40:21.512844 systemd[1]: Detected first boot.
Jan 30 13:40:21.512855 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:40:21.512869 zram_generator::config[1058]: No configuration found.
Jan 30 13:40:21.512882 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:40:21.512899 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:40:21.512919 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:40:21.512930 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:40:21.512946 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:40:21.512958 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:40:21.512970 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:40:21.512981 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:40:21.512994 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:40:21.513006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:40:21.513018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:40:21.513032 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:40:21.513044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:21.513063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:21.513075 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:40:21.513088 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:40:21.513099 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:40:21.513111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:40:21.513129 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:40:21.513141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:40:21.513158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:40:21.513169 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:40:21.513182 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:40:21.513193 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:40:21.513205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:21.513217 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:40:21.513229 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:40:21.513241 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:40:21.513255 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:40:21.513267 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:40:21.513278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:21.513290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:21.513302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:21.513314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:40:21.513326 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:40:21.513338 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:40:21.513349 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:40:21.513364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:21.513376 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:40:21.513387 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:40:21.513399 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:40:21.513412 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:40:21.513424 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:40:21.513436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:40:21.513448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:40:21.513462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:40:21.513474 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:40:21.513486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:40:21.513498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:40:21.513509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:40:21.513521 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:40:21.513533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:40:21.513544 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:40:21.513558 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:40:21.513570 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:40:21.513582 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:40:21.513614 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:40:21.513626 kernel: loop: module loaded
Jan 30 13:40:21.513637 kernel: fuse: init (API version 7.39)
Jan 30 13:40:21.513648 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:40:21.513660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:40:21.513671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:40:21.513688 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:40:21.513700 kernel: ACPI: bus type drm_connector registered
Jan 30 13:40:21.513711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:40:21.513723 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:40:21.513735 systemd[1]: Stopped verity-setup.service.
Jan 30 13:40:21.513747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:21.513774 systemd-journald[1132]: Collecting audit messages is disabled.
Jan 30 13:40:21.513796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:40:21.513810 systemd-journald[1132]: Journal started
Jan 30 13:40:21.513832 systemd-journald[1132]: Runtime Journal (/run/log/journal/4999ef40674f40078004c2e98e91a209) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:40:21.292459 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:40:21.311965 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:40:21.312439 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:40:21.516199 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:40:21.516951 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:40:21.518261 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:40:21.519416 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:40:21.520685 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:40:21.521936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:40:21.523198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:40:21.524678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:21.526295 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:40:21.526469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:40:21.527981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:40:21.528158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:40:21.529676 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:40:21.529847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:40:21.531252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:40:21.531419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:40:21.533153 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:40:21.533319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:40:21.534891 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:40:21.535065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:40:21.536466 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:21.538060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:40:21.539616 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:40:21.552847 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:40:21.568721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:40:21.570921 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:40:21.572042 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:40:21.572078 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:40:21.574002 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:40:21.576247 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
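The modprobe@*.service entries above are all instances of one template unit; systemd substitutes the instance name (configfs, dm_mod, drm, ...) for the %i specifier. A sketch of the template, paraphrased from the upstream systemd unit rather than read from this log:

    # modprobe@.service (sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart tolerates a missing module, which is why each instance finishes and deactivates immediately whether or not the module was actually loaded.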
Jan 30 13:40:21.582301 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:40:21.583528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:40:21.586400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:40:21.590762 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:40:21.591958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:40:21.593291 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:40:21.594461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:40:21.599793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:40:21.604850 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:40:21.607312 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:40:21.608751 systemd-journald[1132]: Time spent on flushing to /var/log/journal/4999ef40674f40078004c2e98e91a209 is 20.869ms for 996 entries.
Jan 30 13:40:21.608751 systemd-journald[1132]: System Journal (/var/log/journal/4999ef40674f40078004c2e98e91a209) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:40:21.640208 systemd-journald[1132]: Received client request to flush runtime journal.
Jan 30 13:40:21.611234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:21.612888 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:40:21.613033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:40:21.615816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:40:21.620342 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:40:21.629296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:40:21.641054 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:40:21.644082 kernel: loop0: detected capacity change from 0 to 218376
Jan 30 13:40:21.646075 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:40:21.648155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:40:21.650832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:21.659448 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:40:21.667468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:40:21.668216 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:40:21.669816 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:40:21.672609 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:40:21.680990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:40:21.701951 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 30 13:40:21.702166 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 30 13:40:21.704619 kernel: loop1: detected capacity change from 0 to 142488
Jan 30 13:40:21.709577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:21.746627 kernel: loop2: detected capacity change from 0 to 140768
Jan 30 13:40:21.779648 kernel: loop3: detected capacity change from 0 to 218376
Jan 30 13:40:21.789631 kernel: loop4: detected capacity change from 0 to 142488
Jan 30 13:40:21.799695 kernel: loop5: detected capacity change from 0 to 140768
Jan 30 13:40:21.812358 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:40:21.813322 (sd-merge)[1196]: Merged extensions into '/usr'.
Jan 30 13:40:21.817166 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:40:21.817186 systemd[1]: Reloading...
Jan 30 13:40:21.875289 zram_generator::config[1222]: No configuration found.
Jan 30 13:40:21.899130 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:40:21.994304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:40:22.043099 systemd[1]: Reloading finished in 225 ms.
Jan 30 13:40:22.076347 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:40:22.077893 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:40:22.091772 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:40:22.093789 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:40:22.101765 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:40:22.101786 systemd[1]: Reloading...
Jan 30 13:40:22.137474 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:40:22.140684 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:40:22.141715 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:40:22.142007 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 30 13:40:22.142095 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 30 13:40:22.144617 zram_generator::config[1290]: No configuration found.
Jan 30 13:40:22.145673 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:40:22.145738 systemd-tmpfiles[1260]: Skipping /boot
Jan 30 13:40:22.158377 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:40:22.158479 systemd-tmpfiles[1260]: Skipping /boot
Jan 30 13:40:22.251628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:40:22.300429 systemd[1]: Reloading finished in 198 ms.
Jan 30 13:40:22.319753 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:40:22.321404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
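The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr, followed by the daemon reload that picks up the unit files they ship. A sysext image is only merged if it carries a matching extension-release file; roughly, and with the field values assumed rather than read from this log:

    # Layout inside kubernetes-v1.32.0-x86-64.raw (illustrative)
    usr/bin/kubelet
    usr/lib/extension-release.d/extension-release.kubernetes

    # extension-release.kubernetes must match the host's os-release, e.g.:
    ID=flatcar
    SYSEXT_LEVEL=1.0

    # Inspect the merge state on a running system:
    systemd-sysext status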
Jan 30 13:40:22.342762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:40:22.345300 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:40:22.347644 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:40:22.352850 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:22.356758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:22.360133 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:40:22.367156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:22.367324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:40:22.370671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:40:22.373906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:40:22.377199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:40:22.378449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:40:22.378551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:22.382188 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:40:22.383633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:40:22.384662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:40:22.390289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:40:22.390488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:40:22.401631 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 30 13:40:22.402377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:40:22.405708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:40:22.405927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:40:22.408063 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:40:22.414633 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:40:22.415727 augenrules[1354]: No rules
Jan 30 13:40:22.418683 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:40:22.421343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:22.421777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:40:22.428935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:40:22.432209 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:40:22.446801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:40:22.448382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:40:22.453778 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:40:22.457863 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:40:22.459657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:22.460056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:40:22.463962 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:40:22.465582 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:40:22.467312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:40:22.467492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:40:22.469002 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:40:22.469187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:40:22.472504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:40:22.472948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:40:22.475102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:40:22.489622 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367)
Jan 30 13:40:22.503413 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:40:22.529012 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:40:22.530256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:40:22.530339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:40:22.530361 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:40:22.541625 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 13:40:22.553723 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:40:22.562332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:40:22.568067 systemd-resolved[1330]: Positive Trust Anchors:
Jan 30 13:40:22.568100 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:40:22.568142 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:40:22.575804 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:40:22.578034 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Jan 30 13:40:22.579997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:22.581494 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:22.591626 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 13:40:22.622886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:22.624504 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:40:22.625062 systemd-networkd[1403]: lo: Link UP
Jan 30 13:40:22.625066 systemd-networkd[1403]: lo: Gained carrier
Jan 30 13:40:22.626822 systemd-networkd[1403]: Enumeration completed
Jan 30 13:40:22.627706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:40:22.628721 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:22.628725 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:40:22.631126 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:40:22.631203 systemd-networkd[1403]: eth0: Link UP
Jan 30 13:40:22.631207 systemd-networkd[1403]: eth0: Gained carrier
Jan 30 13:40:22.631219 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:22.635229 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 30 13:40:22.637276 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 13:40:22.637446 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 13:40:22.638708 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 13:40:22.634791 systemd[1]: Reached target network.target - Network.
Jan 30 13:40:22.637440 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:40:22.640644 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:40:22.643915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:40:22.646652 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:40:22.647948 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jan 30 13:40:23.522716 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:40:23.522760 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2025-01-30 13:40:23.522632 UTC.
Jan 30 13:40:23.523528 systemd-resolved[1330]: Clock change detected. Flushing caches.
Jan 30 13:40:23.526049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:23.526328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:23.532686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
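As in the initrd, eth0 is matched by the catch-all zz-default.network and configured over DHCP (the 10.0.0.64/16 lease above). The file's contents are not shown in the log; a fallback network unit of this kind conventionally looks like:

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

The "zz-" prefix sorts the file last, so any more specific .network file installed by the admin or by parse-ip-for-networkd wins over it.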
Jan 30 13:40:23.585007 kernel: kvm_amd: TSC scaling supported
Jan 30 13:40:23.585078 kernel: kvm_amd: Nested Virtualization enabled
Jan 30 13:40:23.585092 kernel: kvm_amd: Nested Paging enabled
Jan 30 13:40:23.585134 kernel: kvm_amd: LBR virtualization supported
Jan 30 13:40:23.585673 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 30 13:40:23.586824 kernel: kvm_amd: Virtual GIF supported
Jan 30 13:40:23.606541 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:40:23.619292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:23.640903 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:40:23.653705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:40:23.663751 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:40:23.694983 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:40:23.696556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:23.697683 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:40:23.698847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:40:23.700118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:40:23.701562 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:40:23.702740 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:40:23.703991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:40:23.705224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:40:23.705257 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:40:23.706149 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:40:23.707665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:40:23.710338 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:40:23.721194 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:40:23.723502 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:40:23.725082 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:40:23.726232 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:40:23.727198 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:40:23.728158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:40:23.728186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:40:23.729197 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:40:23.731249 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:40:23.734793 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:40:23.737468 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:40:23.738541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:40:23.739773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:40:23.741463 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:40:23.744659 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:40:23.748376 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:40:23.754087 jq[1435]: false
Jan 30 13:40:23.759650 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:40:23.764357 extend-filesystems[1436]: Found loop3
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found loop4
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found loop5
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found sr0
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda1
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda2
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda3
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found usr
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda4
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda6
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda7
Jan 30 13:40:23.765414 extend-filesystems[1436]: Found vda9
Jan 30 13:40:23.765414 extend-filesystems[1436]: Checking size of /dev/vda9
Jan 30 13:40:23.769399 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:40:23.768850 dbus-daemon[1434]: [system] SELinux support is enabled
Jan 30 13:40:23.771482 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:40:23.773210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:40:23.780603 extend-filesystems[1436]: Resized partition /dev/vda9
Jan 30 13:40:23.782366 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:40:23.793417 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 30 13:40:23.793468 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1374)
Jan 30 13:40:23.784691 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:40:23.787588 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:40:23.794060 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:40:23.797674 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:40:23.805754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:40:23.805963 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:40:23.806276 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:40:23.806469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:40:23.811187 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:40:23.811392 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
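extend-filesystems enumerates the block devices above and then grows the root filesystem online; the resize2fs 1.47.1 line is that tool being run against the mounted root. The manual equivalent, assuming the same /dev/vda9 root partition as in this log, would be roughly:

    # Grow a mounted ext4 root to fill its (already enlarged) partition
    lsblk /dev/vda            # confirm which partition holds /
    resize2fs /dev/vda9       # ext4 supports online growth while mounted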
Jan 30 13:40:23.814287 jq[1457]: true Jan 30 13:40:23.826210 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:40:23.831292 jq[1461]: true Jan 30 13:40:23.834522 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:40:23.858265 update_engine[1453]: I20250130 13:40:23.829447 1453 main.cc:92] Flatcar Update Engine starting Jan 30 13:40:23.858265 update_engine[1453]: I20250130 13:40:23.839038 1453 update_check_scheduler.cc:74] Next update check in 2m23s Jan 30 13:40:23.857921 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:40:23.858600 tar[1460]: linux-amd64/LICENSE Jan 30 13:40:23.859340 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:40:23.859340 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:40:23.859340 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:40:23.864967 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 30 13:40:23.865905 tar[1460]: linux-amd64/helm Jan 30 13:40:23.859517 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:40:23.859541 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:40:23.860282 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:40:23.860302 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:40:23.860757 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:40:23.860773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:40:23.862013 systemd-logind[1449]: New seat seat0. Jan 30 13:40:23.869685 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:40:23.877010 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:40:23.878397 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:40:23.878616 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:40:23.891989 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:40:23.893463 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:40:23.897062 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:40:23.896666 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:40:23.907203 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:40:23.920265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:40:23.933742 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:40:23.940336 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:40:23.940603 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:40:23.945490 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:40:23.959779 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 30 13:40:23.967829 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:40:23.970356 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:40:23.971711 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:40:24.031434 containerd[1462]: time="2025-01-30T13:40:24.031241661Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:40:24.054625 containerd[1462]: time="2025-01-30T13:40:24.054581663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056344 containerd[1462]: time="2025-01-30T13:40:24.056277032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056344 containerd[1462]: time="2025-01-30T13:40:24.056304874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:40:24.056344 containerd[1462]: time="2025-01-30T13:40:24.056319652Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:40:24.056499 containerd[1462]: time="2025-01-30T13:40:24.056471817Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:40:24.056499 containerd[1462]: time="2025-01-30T13:40:24.056493989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056620 containerd[1462]: time="2025-01-30T13:40:24.056572927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056620 containerd[1462]: time="2025-01-30T13:40:24.056591321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056788 containerd[1462]: time="2025-01-30T13:40:24.056761671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056788 containerd[1462]: time="2025-01-30T13:40:24.056779905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056829 containerd[1462]: time="2025-01-30T13:40:24.056814339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056829 containerd[1462]: time="2025-01-30T13:40:24.056826061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.056947 containerd[1462]: time="2025-01-30T13:40:24.056918565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:40:24.057193 containerd[1462]: time="2025-01-30T13:40:24.057172832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:40:24.057323 containerd[1462]: time="2025-01-30T13:40:24.057299760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:40:24.057323 containerd[1462]: time="2025-01-30T13:40:24.057314778Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:40:24.057436 containerd[1462]: time="2025-01-30T13:40:24.057406239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:40:24.057535 containerd[1462]: time="2025-01-30T13:40:24.057469047Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:40:24.063637 containerd[1462]: time="2025-01-30T13:40:24.063608268Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:40:24.063672 containerd[1462]: time="2025-01-30T13:40:24.063655987Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:40:24.063702 containerd[1462]: time="2025-01-30T13:40:24.063674141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:40:24.063702 containerd[1462]: time="2025-01-30T13:40:24.063690863Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:40:24.063750 containerd[1462]: time="2025-01-30T13:40:24.063704148Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:40:24.063858 containerd[1462]: time="2025-01-30T13:40:24.063829753Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:40:24.064082 containerd[1462]: time="2025-01-30T13:40:24.064063041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:40:24.064186 containerd[1462]: time="2025-01-30T13:40:24.064168839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:40:24.064186 containerd[1462]: time="2025-01-30T13:40:24.064187985Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:40:24.064235 containerd[1462]: time="2025-01-30T13:40:24.064204656Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:40:24.064235 containerd[1462]: time="2025-01-30T13:40:24.064217901Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064235 containerd[1462]: time="2025-01-30T13:40:24.064231196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064293 containerd[1462]: time="2025-01-30T13:40:24.064244090Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064293 containerd[1462]: time="2025-01-30T13:40:24.064257986Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 13:40:24.064293 containerd[1462]: time="2025-01-30T13:40:24.064272023Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064293 containerd[1462]: time="2025-01-30T13:40:24.064290748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064360 containerd[1462]: time="2025-01-30T13:40:24.064303482Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064360 containerd[1462]: time="2025-01-30T13:40:24.064315133Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:40:24.064360 containerd[1462]: time="2025-01-30T13:40:24.064333728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064360 containerd[1462]: time="2025-01-30T13:40:24.064355800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064368904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064381468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064394212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064407687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064419419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064439 containerd[1462]: time="2025-01-30T13:40:24.064431722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064443654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064457811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064469463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064482728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064495061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064528373Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064547148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:40:24.064560 containerd[1462]: time="2025-01-30T13:40:24.064559151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.064735 containerd[1462]: time="2025-01-30T13:40:24.064570292Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:40:24.064803 containerd[1462]: time="2025-01-30T13:40:24.064780536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064850417Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064902956Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064921510Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064942399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064960644Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.064980641Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:40:24.065383 containerd[1462]: time="2025-01-30T13:40:24.065007502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:40:24.065537 containerd[1462]: time="2025-01-30T13:40:24.065301833Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:40:24.065537 containerd[1462]: time="2025-01-30T13:40:24.065360203Z" level=info msg="Connect containerd service" Jan 30 13:40:24.065703 containerd[1462]: time="2025-01-30T13:40:24.065556621Z" level=info msg="using legacy CRI server" Jan 30 13:40:24.065703 containerd[1462]: time="2025-01-30T13:40:24.065593771Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:40:24.065774 containerd[1462]: time="2025-01-30T13:40:24.065745455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:40:24.066612 containerd[1462]: time="2025-01-30T13:40:24.066580070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:40:24.066769 
containerd[1462]: time="2025-01-30T13:40:24.066725964Z" level=info msg="Start subscribing containerd event" Jan 30 13:40:24.066795 containerd[1462]: time="2025-01-30T13:40:24.066786046Z" level=info msg="Start recovering state" Jan 30 13:40:24.066954 containerd[1462]: time="2025-01-30T13:40:24.066911982Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:40:24.067041 containerd[1462]: time="2025-01-30T13:40:24.067015236Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:40:24.069558 containerd[1462]: time="2025-01-30T13:40:24.069535972Z" level=info msg="Start event monitor" Jan 30 13:40:24.069603 containerd[1462]: time="2025-01-30T13:40:24.069572230Z" level=info msg="Start snapshots syncer" Jan 30 13:40:24.069603 containerd[1462]: time="2025-01-30T13:40:24.069583001Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:40:24.069603 containerd[1462]: time="2025-01-30T13:40:24.069591396Z" level=info msg="Start streaming server" Jan 30 13:40:24.069756 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:40:24.070939 containerd[1462]: time="2025-01-30T13:40:24.070913215Z" level=info msg="containerd successfully booted in 0.040752s" Jan 30 13:40:24.261576 tar[1460]: linux-amd64/README.md Jan 30 13:40:24.281044 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:40:24.990726 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 30 13:40:24.994058 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:40:24.996006 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:40:25.009722 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:40:25.012267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:25.014958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:40:25.037769 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:40:25.039619 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:40:25.039873 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:40:25.042322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:40:25.694406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:25.696164 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:40:25.700075 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:40:25.700753 systemd[1]: Startup finished in 685ms (kernel) + 5.101s (initrd) + 4.065s (userspace) = 9.853s. Jan 30 13:40:26.087693 kubelet[1548]: E0130 13:40:26.087563 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:40:26.091549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:40:26.091775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:40:33.733648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 13:40:33.734883 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:34294.service - OpenSSH per-connection server daemon (10.0.0.1:34294). Jan 30 13:40:33.780114 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 34294 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:33.782139 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:33.790319 systemd-logind[1449]: New session 1 of user core. Jan 30 13:40:33.791569 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:40:33.804720 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:40:33.816021 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:40:33.829772 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:40:33.832562 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:40:33.939828 systemd[1565]: Queued start job for default target default.target. Jan 30 13:40:33.950746 systemd[1565]: Created slice app.slice - User Application Slice. Jan 30 13:40:33.950773 systemd[1565]: Reached target paths.target - Paths. Jan 30 13:40:33.950787 systemd[1565]: Reached target timers.target - Timers. Jan 30 13:40:33.952221 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:40:33.963468 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:40:33.963604 systemd[1565]: Reached target sockets.target - Sockets. Jan 30 13:40:33.963623 systemd[1565]: Reached target basic.target - Basic System. Jan 30 13:40:33.963659 systemd[1565]: Reached target default.target - Main User Target. Jan 30 13:40:33.963690 systemd[1565]: Startup finished in 124ms. Jan 30 13:40:33.964182 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:40:33.965679 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:40:34.025482 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:34306.service - OpenSSH per-connection server daemon (10.0.0.1:34306). Jan 30 13:40:34.060660 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 34306 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.062157 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.066279 systemd-logind[1449]: New session 2 of user core. Jan 30 13:40:34.072642 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:40:34.125958 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.141318 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:34306.service: Deactivated successfully. Jan 30 13:40:34.143083 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:40:34.144733 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:40:34.158751 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:34316.service - OpenSSH per-connection server daemon (10.0.0.1:34316). Jan 30 13:40:34.159577 systemd-logind[1449]: Removed session 2. Jan 30 13:40:34.188381 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 34316 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.189873 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.193445 systemd-logind[1449]: New session 3 of user core. 
Jan 30 13:40:34.199609 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:40:34.248660 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.262281 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:34316.service: Deactivated successfully. Jan 30 13:40:34.264086 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:40:34.265716 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:40:34.266943 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:34328.service - OpenSSH per-connection server daemon (10.0.0.1:34328). Jan 30 13:40:34.267673 systemd-logind[1449]: Removed session 3. Jan 30 13:40:34.302035 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 34328 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.303458 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.307048 systemd-logind[1449]: New session 4 of user core. Jan 30 13:40:34.317719 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:40:34.372386 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.384352 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:34328.service: Deactivated successfully. Jan 30 13:40:34.386096 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:40:34.387731 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:40:34.398872 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:34344.service - OpenSSH per-connection server daemon (10.0.0.1:34344). Jan 30 13:40:34.399800 systemd-logind[1449]: Removed session 4. Jan 30 13:40:34.428928 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 34344 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.430369 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.434206 systemd-logind[1449]: New session 5 of user core. Jan 30 13:40:34.443663 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:40:34.623472 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:40:34.623842 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:40:34.640957 sudo[1600]: pam_unix(sudo:session): session closed for user root Jan 30 13:40:34.642883 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.660328 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:34344.service: Deactivated successfully. Jan 30 13:40:34.662088 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:40:34.663750 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:40:34.678849 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:34348.service - OpenSSH per-connection server daemon (10.0.0.1:34348). Jan 30 13:40:34.679907 systemd-logind[1449]: Removed session 5. Jan 30 13:40:34.709436 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 34348 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.710954 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.714625 systemd-logind[1449]: New session 6 of user core. Jan 30 13:40:34.724672 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:40:34.777597 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:40:34.777943 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:40:34.781843 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 30 13:40:34.788130 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:40:34.788470 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:40:34.803730 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:40:34.805341 auditctl[1612]: No rules Jan 30 13:40:34.806701 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:40:34.806962 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:40:34.808685 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:40:34.838809 augenrules[1630]: No rules Jan 30 13:40:34.840728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:40:34.842074 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 30 13:40:34.844017 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.856392 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:34348.service: Deactivated successfully. Jan 30 13:40:34.858258 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:40:34.859927 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:40:34.868740 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:34360.service - OpenSSH per-connection server daemon (10.0.0.1:34360). Jan 30 13:40:34.869646 systemd-logind[1449]: Removed session 6. Jan 30 13:40:34.899816 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 34360 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:40:34.901346 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:34.905398 systemd-logind[1449]: New session 7 of user core. Jan 30 13:40:34.914624 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:40:34.967144 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:40:34.967484 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:40:35.247741 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:40:35.247904 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:40:35.515790 dockerd[1659]: time="2025-01-30T13:40:35.515621660Z" level=info msg="Starting up" Jan 30 13:40:35.620586 dockerd[1659]: time="2025-01-30T13:40:35.620327944Z" level=info msg="Loading containers: start." Jan 30 13:40:35.731538 kernel: Initializing XFRM netlink socket Jan 30 13:40:35.809417 systemd-networkd[1403]: docker0: Link UP Jan 30 13:40:35.835976 dockerd[1659]: time="2025-01-30T13:40:35.835945467Z" level=info msg="Loading containers: done." Jan 30 13:40:35.849459 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4048010189-merged.mount: Deactivated successfully. 
Jan 30 13:40:35.851242 dockerd[1659]: time="2025-01-30T13:40:35.851201061Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:40:35.851314 dockerd[1659]: time="2025-01-30T13:40:35.851294897Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:40:35.851423 dockerd[1659]: time="2025-01-30T13:40:35.851400165Z" level=info msg="Daemon has completed initialization" Jan 30 13:40:35.887873 dockerd[1659]: time="2025-01-30T13:40:35.887801688Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:40:35.889051 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:40:36.341917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:40:36.351655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:36.377924 containerd[1462]: time="2025-01-30T13:40:36.377884352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:40:36.516760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:36.521005 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:40:36.627530 kubelet[1815]: E0130 13:40:36.627394 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:40:36.634080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:40:36.634284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:40:37.225526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990443555.mount: Deactivated successfully. 
Jan 30 13:40:38.226875 containerd[1462]: time="2025-01-30T13:40:38.226820878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:38.227550 containerd[1462]: time="2025-01-30T13:40:38.227482809Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:40:38.228790 containerd[1462]: time="2025-01-30T13:40:38.228754564Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:38.231711 containerd[1462]: time="2025-01-30T13:40:38.231658779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:38.232706 containerd[1462]: time="2025-01-30T13:40:38.232669564Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.854748983s" Jan 30 13:40:38.232706 containerd[1462]: time="2025-01-30T13:40:38.232705051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:40:38.233250 containerd[1462]: time="2025-01-30T13:40:38.233218263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:40:39.572181 containerd[1462]: time="2025-01-30T13:40:39.572122147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:39.572946 containerd[1462]: time="2025-01-30T13:40:39.572909163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:40:39.574248 containerd[1462]: time="2025-01-30T13:40:39.574215933Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:39.576866 containerd[1462]: time="2025-01-30T13:40:39.576811199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:39.577861 containerd[1462]: time="2025-01-30T13:40:39.577829930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.344577643s" Jan 30 13:40:39.577905 containerd[1462]: time="2025-01-30T13:40:39.577861068Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:40:39.578483 
containerd[1462]: time="2025-01-30T13:40:39.578331851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:40:41.254094 containerd[1462]: time="2025-01-30T13:40:41.254038761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:41.255020 containerd[1462]: time="2025-01-30T13:40:41.254984344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:40:41.256348 containerd[1462]: time="2025-01-30T13:40:41.256317433Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:41.259227 containerd[1462]: time="2025-01-30T13:40:41.259183677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:41.260373 containerd[1462]: time="2025-01-30T13:40:41.260330047Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.681960205s" Jan 30 13:40:41.260373 containerd[1462]: time="2025-01-30T13:40:41.260364602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:40:41.260803 containerd[1462]: time="2025-01-30T13:40:41.260774129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:40:42.484647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811475938.mount: Deactivated successfully. 
Jan 30 13:40:42.776926 containerd[1462]: time="2025-01-30T13:40:42.776774620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:42.777847 containerd[1462]: time="2025-01-30T13:40:42.777782780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:40:42.779039 containerd[1462]: time="2025-01-30T13:40:42.779006455Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:42.781062 containerd[1462]: time="2025-01-30T13:40:42.781032153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:42.781713 containerd[1462]: time="2025-01-30T13:40:42.781668245Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.520858809s" Jan 30 13:40:42.781740 containerd[1462]: time="2025-01-30T13:40:42.781713029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:40:42.782152 containerd[1462]: time="2025-01-30T13:40:42.782132326Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:40:43.364372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767578494.mount: Deactivated successfully. 
Jan 30 13:40:44.325662 containerd[1462]: time="2025-01-30T13:40:44.325601911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.326406 containerd[1462]: time="2025-01-30T13:40:44.326375792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:40:44.327500 containerd[1462]: time="2025-01-30T13:40:44.327474672Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.330361 containerd[1462]: time="2025-01-30T13:40:44.330334344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.331548 containerd[1462]: time="2025-01-30T13:40:44.331488398Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.549329142s" Jan 30 13:40:44.331585 containerd[1462]: time="2025-01-30T13:40:44.331547749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:40:44.332060 containerd[1462]: time="2025-01-30T13:40:44.332020916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:40:44.822460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011047839.mount: Deactivated successfully. 
Jan 30 13:40:44.829265 containerd[1462]: time="2025-01-30T13:40:44.829205147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.830089 containerd[1462]: time="2025-01-30T13:40:44.830040994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:40:44.831180 containerd[1462]: time="2025-01-30T13:40:44.831138181Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.833479 containerd[1462]: time="2025-01-30T13:40:44.833426692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:44.834210 containerd[1462]: time="2025-01-30T13:40:44.834166108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.114785ms" Jan 30 13:40:44.834210 containerd[1462]: time="2025-01-30T13:40:44.834201274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:40:44.834688 containerd[1462]: time="2025-01-30T13:40:44.834666587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:40:45.644481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779807558.mount: Deactivated successfully. Jan 30 13:40:46.885062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:40:46.894670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:47.074090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:47.077689 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:40:47.277408 kubelet[2011]: E0130 13:40:47.277212 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:40:47.281607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:40:47.281813 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:40:47.855479 containerd[1462]: time="2025-01-30T13:40:47.855423996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:47.856379 containerd[1462]: time="2025-01-30T13:40:47.856287916Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:40:47.857750 containerd[1462]: time="2025-01-30T13:40:47.857714882Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:47.860805 containerd[1462]: time="2025-01-30T13:40:47.860774007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:40:47.861968 containerd[1462]: time="2025-01-30T13:40:47.861927590Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.027164052s" Jan 30 13:40:47.862029 containerd[1462]: time="2025-01-30T13:40:47.861969168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:40:49.942744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:49.953723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:49.977752 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)... Jan 30 13:40:49.977768 systemd[1]: Reloading... Jan 30 13:40:50.059550 zram_generator::config[2094]: No configuration found. Jan 30 13:40:50.230544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:40:50.306899 systemd[1]: Reloading finished in 328 ms. Jan 30 13:40:50.355584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:40:50.355676 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:40:50.355943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:50.358461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:50.516894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:50.521148 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:40:50.557226 kubelet[2140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:40:50.557226 kubelet[2140]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:40:50.557226 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:40:50.557595 kubelet[2140]: I0130 13:40:50.557281 2140 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:40:50.802524 kubelet[2140]: I0130 13:40:50.802386 2140 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:40:50.802524 kubelet[2140]: I0130 13:40:50.802414 2140 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:40:50.802822 kubelet[2140]: I0130 13:40:50.802671 2140 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:40:50.821155 kubelet[2140]: E0130 13:40:50.821098 2140 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:50.822400 kubelet[2140]: I0130 13:40:50.822368 2140 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:40:50.831431 kubelet[2140]: E0130 13:40:50.831389 2140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:40:50.831431 kubelet[2140]: I0130 13:40:50.831418 2140 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:40:50.836586 kubelet[2140]: I0130 13:40:50.836565 2140 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:40:50.837696 kubelet[2140]: I0130 13:40:50.837659 2140 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:40:50.837853 kubelet[2140]: I0130 13:40:50.837687 2140 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:40:50.837853 kubelet[2140]: I0130 13:40:50.837851 2140 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:40:50.837966 kubelet[2140]: I0130 13:40:50.837860 2140 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:40:50.837991 kubelet[2140]: I0130 13:40:50.837986 2140 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:40:50.841447 kubelet[2140]: I0130 13:40:50.841401 2140 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:40:50.841447 kubelet[2140]: I0130 13:40:50.841443 2140 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:40:50.841533 kubelet[2140]: I0130 13:40:50.841468 2140 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:40:50.841533 kubelet[2140]: I0130 13:40:50.841483 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:40:50.844677 kubelet[2140]: I0130 13:40:50.844652 2140 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:40:50.845185 kubelet[2140]: I0130 13:40:50.845031 2140 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:40:50.845185 kubelet[2140]: W0130 13:40:50.845055 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:50.845185 kubelet[2140]: W0130 13:40:50.845102 2140 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:40:50.845185 kubelet[2140]: E0130 13:40:50.845104 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:50.845803 kubelet[2140]: W0130 13:40:50.845752 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:50.845803 kubelet[2140]: E0130 13:40:50.845796 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:50.847184 kubelet[2140]: I0130 13:40:50.847158 2140 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:40:50.847232 kubelet[2140]: I0130 13:40:50.847199 2140 server.go:1287] "Started kubelet" Jan 30 13:40:50.850405 kubelet[2140]: I0130 13:40:50.849854 2140 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:40:50.850405 kubelet[2140]: I0130 13:40:50.849850 2140 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:40:50.850405 kubelet[2140]: I0130 13:40:50.850223 2140 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:40:50.850633 kubelet[2140]: I0130 13:40:50.850550 2140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:40:50.850633 kubelet[2140]: I0130 13:40:50.850579 2140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:40:50.851638 kubelet[2140]: I0130 13:40:50.850845 2140 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:40:50.851638 kubelet[2140]: I0130 13:40:50.850893 2140 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:40:50.851724 kubelet[2140]: E0130 13:40:50.851654 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:40:50.852928 kubelet[2140]: W0130 13:40:50.852364 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:50.852928 kubelet[2140]: E0130 13:40:50.852408 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:50.852928 kubelet[2140]: I0130 13:40:50.852718 2140 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:40:50.852928 kubelet[2140]: I0130 13:40:50.852784 2140 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:40:50.852928 kubelet[2140]: I0130 13:40:50.852822 2140 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:40:50.854462 kubelet[2140]: I0130 13:40:50.854436 2140 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:40:50.854978 kubelet[2140]: E0130 13:40:50.851484 2140 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c1e518a2c54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:40:50.847173716 +0000 UTC m=+0.322276409,LastTimestamp:2025-01-30 13:40:50.847173716 +0000 UTC m=+0.322276409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:40:50.855666 kubelet[2140]: E0130 13:40:50.855633 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jan 30 13:40:50.855666 kubelet[2140]: E0130 13:40:50.855653 2140 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:40:50.856080 kubelet[2140]: I0130 13:40:50.856045 2140 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:40:50.869792 kubelet[2140]: I0130 13:40:50.869752 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:40:50.870792 kubelet[2140]: I0130 13:40:50.870761 2140 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:40:50.870792 kubelet[2140]: I0130 13:40:50.870783 2140 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:40:50.870861 kubelet[2140]: I0130 13:40:50.870800 2140 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:40:50.871434 kubelet[2140]: I0130 13:40:50.871409 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:40:50.871468 kubelet[2140]: I0130 13:40:50.871436 2140 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:40:50.871468 kubelet[2140]: I0130 13:40:50.871463 2140 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:40:50.871649 kubelet[2140]: I0130 13:40:50.871473 2140 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:40:50.871649 kubelet[2140]: E0130 13:40:50.871534 2140 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:40:50.952353 kubelet[2140]: E0130 13:40:50.952292 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:40:50.971655 kubelet[2140]: E0130 13:40:50.971616 2140 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:40:51.052493 kubelet[2140]: E0130 13:40:51.052461 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:40:51.056110 kubelet[2140]: E0130 13:40:51.056020 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jan 30 13:40:51.141769 kubelet[2140]: I0130 13:40:51.141723 2140 policy_none.go:49] "None policy: Start" Jan 30 13:40:51.141769 kubelet[2140]: I0130 13:40:51.141754 2140 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:40:51.141769 kubelet[2140]: I0130 13:40:51.141767 2140 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:40:51.142018 kubelet[2140]: W0130 13:40:51.141885 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:51.142018 kubelet[2140]: E0130 13:40:51.141938 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:51.149173 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:40:51.152841 kubelet[2140]: E0130 13:40:51.152804 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:40:51.162897 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:40:51.166231 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:40:51.172056 kubelet[2140]: E0130 13:40:51.172021 2140 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:40:51.172468 kubelet[2140]: I0130 13:40:51.172442 2140 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:40:51.172711 kubelet[2140]: I0130 13:40:51.172685 2140 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:40:51.172799 kubelet[2140]: I0130 13:40:51.172705 2140 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:40:51.172956 kubelet[2140]: I0130 13:40:51.172931 2140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:40:51.173958 kubelet[2140]: E0130 13:40:51.173926 2140 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:40:51.173958 kubelet[2140]: E0130 13:40:51.173960 2140 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:40:51.274244 kubelet[2140]: I0130 13:40:51.274214 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:51.274644 kubelet[2140]: E0130 13:40:51.274614 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 13:40:51.457753 kubelet[2140]: E0130 13:40:51.457591 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jan 30 13:40:51.476675 kubelet[2140]: I0130 13:40:51.476649 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:51.476954 kubelet[2140]: E0130 13:40:51.476916 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 13:40:51.579995 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 30 13:40:51.602491 kubelet[2140]: E0130 13:40:51.602454 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:51.605370 systemd[1]: Created slice kubepods-burstable-pod6181d5a98d50fc8b2b2cdf9bdbbb7871.slice - libcontainer container kubepods-burstable-pod6181d5a98d50fc8b2b2cdf9bdbbb7871.slice. Jan 30 13:40:51.615729 kubelet[2140]: E0130 13:40:51.615696 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:51.618427 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
Jan 30 13:40:51.619995 kubelet[2140]: E0130 13:40:51.619974 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:51.657553 kubelet[2140]: I0130 13:40:51.657468 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:51.657619 kubelet[2140]: I0130 13:40:51.657595 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:51.657654 kubelet[2140]: I0130 13:40:51.657618 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:51.657654 kubelet[2140]: I0130 13:40:51.657641 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:51.657701 kubelet[2140]: I0130 13:40:51.657658 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:51.657701 kubelet[2140]: I0130 13:40:51.657676 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:51.657701 kubelet[2140]: I0130 13:40:51.657693 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:51.657778 kubelet[2140]: I0130 13:40:51.657709 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:51.657778 kubelet[2140]: I0130 13:40:51.657727 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:51.800972 kubelet[2140]: W0130 13:40:51.800812 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:51.800972 kubelet[2140]: E0130 13:40:51.800887 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:51.830823 kubelet[2140]: W0130 13:40:51.830784 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:51.830823 kubelet[2140]: E0130 13:40:51.830819 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:51.878369 kubelet[2140]: I0130 13:40:51.878332 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:51.878719 kubelet[2140]: E0130 13:40:51.878682 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 13:40:51.903909 kubelet[2140]: E0130 13:40:51.903883 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:51.904499 containerd[1462]: time="2025-01-30T13:40:51.904456371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:40:51.916629 kubelet[2140]: E0130 13:40:51.916609 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:51.917002 containerd[1462]: time="2025-01-30T13:40:51.916968391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6181d5a98d50fc8b2b2cdf9bdbbb7871,Namespace:kube-system,Attempt:0,}" Jan 30 13:40:51.921188 kubelet[2140]: E0130 13:40:51.921153 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:51.921441 containerd[1462]: time="2025-01-30T13:40:51.921408346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:40:52.026643 kubelet[2140]: W0130 13:40:52.026576 2140 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:52.026771 kubelet[2140]: E0130 13:40:52.026651 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:52.258164 kubelet[2140]: E0130 13:40:52.258031 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jan 30 13:40:52.336707 kubelet[2140]: W0130 13:40:52.336672 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 30 13:40:52.336799 kubelet[2140]: E0130 13:40:52.336716 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:52.463122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265728481.mount: Deactivated successfully. 
Jan 30 13:40:52.470158 containerd[1462]: time="2025-01-30T13:40:52.470117105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:40:52.472044 containerd[1462]: time="2025-01-30T13:40:52.472011837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:40:52.472897 containerd[1462]: time="2025-01-30T13:40:52.472872040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:40:52.475555 containerd[1462]: time="2025-01-30T13:40:52.475522379Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:40:52.476547 containerd[1462]: time="2025-01-30T13:40:52.476517024Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:40:52.477480 containerd[1462]: time="2025-01-30T13:40:52.477448411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:40:52.478545 containerd[1462]: time="2025-01-30T13:40:52.478515872Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:40:52.480235 containerd[1462]: time="2025-01-30T13:40:52.480203587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:40:52.482117 containerd[1462]: time="2025-01-30T13:40:52.482085165Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.538324ms" Jan 30 13:40:52.482868 containerd[1462]: time="2025-01-30T13:40:52.482844538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.390547ms" Jan 30 13:40:52.485707 containerd[1462]: time="2025-01-30T13:40:52.485678983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.632876ms" Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629151945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629202930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629216536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629299331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629364283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629412533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629487284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.629748 containerd[1462]: time="2025-01-30T13:40:52.629645079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.630731 containerd[1462]: time="2025-01-30T13:40:52.630654422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:40:52.630731 containerd[1462]: time="2025-01-30T13:40:52.630707351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:40:52.630825 containerd[1462]: time="2025-01-30T13:40:52.630721969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.630890 containerd[1462]: time="2025-01-30T13:40:52.630813600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:40:52.656647 systemd[1]: Started cri-containerd-0f8bf4cca430267c0688229247ce9e1f259cdda8a24d5bcb3e3c360e4bbe1be3.scope - libcontainer container 0f8bf4cca430267c0688229247ce9e1f259cdda8a24d5bcb3e3c360e4bbe1be3. Jan 30 13:40:52.658100 systemd[1]: Started cri-containerd-91e22d4506fa7d05bfc7b0b7d417eb4889c69db9fbbe404aff5cdd0ce1e058c0.scope - libcontainer container 91e22d4506fa7d05bfc7b0b7d417eb4889c69db9fbbe404aff5cdd0ce1e058c0. Jan 30 13:40:52.661790 systemd[1]: Started cri-containerd-087360fee6fb519071db3a22bcb512e4afaccaad068c0fad4f477f0b64c5696e.scope - libcontainer container 087360fee6fb519071db3a22bcb512e4afaccaad068c0fad4f477f0b64c5696e. 
Jan 30 13:40:52.681044 kubelet[2140]: I0130 13:40:52.681005 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:52.682303 kubelet[2140]: E0130 13:40:52.681440 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 30 13:40:52.698283 containerd[1462]: time="2025-01-30T13:40:52.698230928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"91e22d4506fa7d05bfc7b0b7d417eb4889c69db9fbbe404aff5cdd0ce1e058c0\"" Jan 30 13:40:52.699754 kubelet[2140]: E0130 13:40:52.699596 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:52.702691 containerd[1462]: time="2025-01-30T13:40:52.702610910Z" level=info msg="CreateContainer within sandbox \"91e22d4506fa7d05bfc7b0b7d417eb4889c69db9fbbe404aff5cdd0ce1e058c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:40:52.703464 containerd[1462]: time="2025-01-30T13:40:52.703440806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6181d5a98d50fc8b2b2cdf9bdbbb7871,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f8bf4cca430267c0688229247ce9e1f259cdda8a24d5bcb3e3c360e4bbe1be3\"" Jan 30 13:40:52.704364 kubelet[2140]: E0130 13:40:52.704337 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:52.708134 containerd[1462]: time="2025-01-30T13:40:52.708032085Z" level=info msg="CreateContainer within sandbox \"0f8bf4cca430267c0688229247ce9e1f259cdda8a24d5bcb3e3c360e4bbe1be3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:40:52.711372 containerd[1462]: time="2025-01-30T13:40:52.711344696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"087360fee6fb519071db3a22bcb512e4afaccaad068c0fad4f477f0b64c5696e\"" Jan 30 13:40:52.712554 kubelet[2140]: E0130 13:40:52.712524 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:52.714099 containerd[1462]: time="2025-01-30T13:40:52.714060208Z" level=info msg="CreateContainer within sandbox \"087360fee6fb519071db3a22bcb512e4afaccaad068c0fad4f477f0b64c5696e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:40:52.865287 kubelet[2140]: E0130 13:40:52.865233 2140 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:40:52.872928 containerd[1462]: time="2025-01-30T13:40:52.872880265Z" level=info msg="CreateContainer within sandbox \"91e22d4506fa7d05bfc7b0b7d417eb4889c69db9fbbe404aff5cdd0ce1e058c0\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d5f47ebca688d22b7c12d2bede6c1f849e844d0a4a741ee455976e43abfc413\"" Jan 30 13:40:52.873494 containerd[1462]: time="2025-01-30T13:40:52.873470462Z" level=info msg="StartContainer for \"4d5f47ebca688d22b7c12d2bede6c1f849e844d0a4a741ee455976e43abfc413\"" Jan 30 13:40:52.877681 containerd[1462]: time="2025-01-30T13:40:52.877631213Z" level=info msg="CreateContainer within sandbox \"087360fee6fb519071db3a22bcb512e4afaccaad068c0fad4f477f0b64c5696e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b931020f38d420d3dc920ef1ad559054f3ba5f80a08cccfe7799d4fb878094b9\"" Jan 30 13:40:52.877986 containerd[1462]: time="2025-01-30T13:40:52.877963627Z" level=info msg="StartContainer for \"b931020f38d420d3dc920ef1ad559054f3ba5f80a08cccfe7799d4fb878094b9\"" Jan 30 13:40:52.879688 containerd[1462]: time="2025-01-30T13:40:52.879639529Z" level=info msg="CreateContainer within sandbox \"0f8bf4cca430267c0688229247ce9e1f259cdda8a24d5bcb3e3c360e4bbe1be3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cdfb1d630c1d8575082a4dddfc6179eaa1015f629655102bc435c538cd766ca\"" Jan 30 13:40:52.880059 containerd[1462]: time="2025-01-30T13:40:52.879955732Z" level=info msg="StartContainer for \"8cdfb1d630c1d8575082a4dddfc6179eaa1015f629655102bc435c538cd766ca\"" Jan 30 13:40:52.901716 systemd[1]: Started cri-containerd-4d5f47ebca688d22b7c12d2bede6c1f849e844d0a4a741ee455976e43abfc413.scope - libcontainer container 4d5f47ebca688d22b7c12d2bede6c1f849e844d0a4a741ee455976e43abfc413. Jan 30 13:40:52.905957 systemd[1]: Started cri-containerd-8cdfb1d630c1d8575082a4dddfc6179eaa1015f629655102bc435c538cd766ca.scope - libcontainer container 8cdfb1d630c1d8575082a4dddfc6179eaa1015f629655102bc435c538cd766ca. Jan 30 13:40:52.907806 systemd[1]: Started cri-containerd-b931020f38d420d3dc920ef1ad559054f3ba5f80a08cccfe7799d4fb878094b9.scope - libcontainer container b931020f38d420d3dc920ef1ad559054f3ba5f80a08cccfe7799d4fb878094b9. 
Jan 30 13:40:52.945279 containerd[1462]: time="2025-01-30T13:40:52.945192250Z" level=info msg="StartContainer for \"4d5f47ebca688d22b7c12d2bede6c1f849e844d0a4a741ee455976e43abfc413\" returns successfully" Jan 30 13:40:52.951118 containerd[1462]: time="2025-01-30T13:40:52.950864305Z" level=info msg="StartContainer for \"8cdfb1d630c1d8575082a4dddfc6179eaa1015f629655102bc435c538cd766ca\" returns successfully" Jan 30 13:40:52.956588 containerd[1462]: time="2025-01-30T13:40:52.956455860Z" level=info msg="StartContainer for \"b931020f38d420d3dc920ef1ad559054f3ba5f80a08cccfe7799d4fb878094b9\" returns successfully" Jan 30 13:40:53.887198 kubelet[2140]: E0130 13:40:53.887144 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:53.887638 kubelet[2140]: E0130 13:40:53.887280 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:53.888540 kubelet[2140]: E0130 13:40:53.888517 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:53.888675 kubelet[2140]: E0130 13:40:53.888653 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:53.890366 kubelet[2140]: E0130 13:40:53.890349 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:40:53.890456 kubelet[2140]: E0130 13:40:53.890428 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:54.075765 kubelet[2140]: E0130 13:40:54.075717 2140 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:40:54.283543 kubelet[2140]: I0130 13:40:54.283413 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:54.289077 kubelet[2140]: I0130 13:40:54.289050 2140 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:40:54.289140 kubelet[2140]: E0130 13:40:54.289086 2140 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:40:54.355734 kubelet[2140]: I0130 13:40:54.355696 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:54.359403 kubelet[2140]: E0130 13:40:54.359382 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:54.359403 kubelet[2140]: I0130 13:40:54.359401 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:54.360877 kubelet[2140]: E0130 13:40:54.360845 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:54.360877 
kubelet[2140]: I0130 13:40:54.360862 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:54.362053 kubelet[2140]: E0130 13:40:54.362020 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:54.843200 kubelet[2140]: I0130 13:40:54.843161 2140 apiserver.go:52] "Watching apiserver" Jan 30 13:40:54.853461 kubelet[2140]: I0130 13:40:54.853422 2140 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:40:54.891692 kubelet[2140]: I0130 13:40:54.891662 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:54.892092 kubelet[2140]: I0130 13:40:54.891769 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:54.892092 kubelet[2140]: I0130 13:40:54.891906 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:54.893304 kubelet[2140]: E0130 13:40:54.893273 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:54.893304 kubelet[2140]: E0130 13:40:54.893297 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:54.893473 kubelet[2140]: E0130 13:40:54.893410 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:54.893473 kubelet[2140]: E0130 13:40:54.893423 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:54.893831 kubelet[2140]: E0130 13:40:54.893814 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:54.893917 kubelet[2140]: E0130 13:40:54.893902 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:55.893335 kubelet[2140]: I0130 13:40:55.893306 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:55.893335 kubelet[2140]: I0130 13:40:55.893320 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:55.893883 kubelet[2140]: I0130 13:40:55.893359 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:55.899009 kubelet[2140]: E0130 13:40:55.898980 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:55.899882 kubelet[2140]: E0130 13:40:55.899835 2140 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:55.899990 kubelet[2140]: E0130 13:40:55.899959 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:55.956388 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-7.scope)... Jan 30 13:40:55.956404 systemd[1]: Reloading... Jan 30 13:40:56.032591 zram_generator::config[2459]: No configuration found. Jan 30 13:40:56.133747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:40:56.223165 systemd[1]: Reloading finished in 266 ms. Jan 30 13:40:56.270129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:56.287206 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:40:56.287519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:56.298723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:56.458837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:40:56.466142 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:40:56.512879 kubelet[2501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:40:56.512879 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:40:56.512879 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:40:56.513237 kubelet[2501]: I0130 13:40:56.512908 2501 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:40:56.520179 kubelet[2501]: I0130 13:40:56.520148 2501 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:40:56.520179 kubelet[2501]: I0130 13:40:56.520176 2501 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:40:56.520518 kubelet[2501]: I0130 13:40:56.520490 2501 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:40:56.521693 kubelet[2501]: I0130 13:40:56.521674 2501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 13:40:56.524061 kubelet[2501]: I0130 13:40:56.524040 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:40:56.528145 kubelet[2501]: E0130 13:40:56.528096 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:40:56.528145 kubelet[2501]: I0130 13:40:56.528134 2501 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:40:56.532623 kubelet[2501]: I0130 13:40:56.532595 2501 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:40:56.532850 kubelet[2501]: I0130 13:40:56.532818 2501 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:40:56.533002 kubelet[2501]: I0130 13:40:56.532846 2501 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:40:56.533073 kubelet[2501]: I0130 13:40:56.533010 2501 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:40:56.533073 kubelet[2501]: I0130 13:40:56.533018 2501 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:40:56.533073 kubelet[2501]: I0130 13:40:56.533055 2501 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:40:56.533240 kubelet[2501]: I0130 13:40:56.533229 2501 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:40:56.533270 kubelet[2501]: I0130 13:40:56.533242 2501 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:40:56.533270 kubelet[2501]: I0130 13:40:56.533257 2501 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:40:56.533270 kubelet[2501]: I0130 13:40:56.533266 2501 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 30 13:40:56.534295 kubelet[2501]: I0130 13:40:56.534261 2501 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:40:56.534974 kubelet[2501]: I0130 13:40:56.534946 2501 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:40:56.535704 kubelet[2501]: I0130 13:40:56.535682 2501 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:40:56.535763 kubelet[2501]: I0130 13:40:56.535722 2501 server.go:1287] "Started kubelet" Jan 30 13:40:56.535937 kubelet[2501]: I0130 13:40:56.535858 2501 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:40:56.536043 kubelet[2501]: I0130 13:40:56.535994 2501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:40:56.538521 kubelet[2501]: I0130 13:40:56.536271 2501 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:40:56.538521 kubelet[2501]: I0130 13:40:56.536844 2501 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:40:56.543280 kubelet[2501]: I0130 13:40:56.543249 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:40:56.544449 kubelet[2501]: E0130 13:40:56.544411 2501 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:40:56.545252 kubelet[2501]: I0130 13:40:56.545225 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:40:56.547152 kubelet[2501]: I0130 13:40:56.547127 2501 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:40:56.548663 kubelet[2501]: I0130 13:40:56.548641 2501 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:40:56.550667 kubelet[2501]: I0130 13:40:56.550653 2501 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:40:56.550797 kubelet[2501]: I0130 13:40:56.550781 2501 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:40:56.551336 kubelet[2501]: I0130 13:40:56.551325 2501 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:40:56.552820 kubelet[2501]: I0130 13:40:56.552792 2501 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:40:56.555846 kubelet[2501]: I0130 13:40:56.555819 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:40:56.557109 kubelet[2501]: I0130 13:40:56.557095 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:40:56.557180 kubelet[2501]: I0130 13:40:56.557171 2501 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:40:56.557264 kubelet[2501]: I0130 13:40:56.557252 2501 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:40:56.557308 kubelet[2501]: I0130 13:40:56.557300 2501 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:40:56.557401 kubelet[2501]: E0130 13:40:56.557384 2501 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:40:56.586459 kubelet[2501]: I0130 13:40:56.586424 2501 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:40:56.586459 kubelet[2501]: I0130 13:40:56.586448 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:40:56.586459 kubelet[2501]: I0130 13:40:56.586467 2501 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:40:56.586708 kubelet[2501]: I0130 13:40:56.586687 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:40:56.586749 kubelet[2501]: I0130 13:40:56.586709 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:40:56.586749 kubelet[2501]: I0130 13:40:56.586732 2501 policy_none.go:49] "None policy: Start" Jan 30 13:40:56.586749 kubelet[2501]: I0130 13:40:56.586743 2501 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:40:56.586813 kubelet[2501]: I0130 13:40:56.586755 2501 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:40:56.586912 kubelet[2501]: I0130 13:40:56.586894 2501 state_mem.go:75] "Updated machine memory state" Jan 30 13:40:56.590906 kubelet[2501]: I0130 13:40:56.590876 2501 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:40:56.591088 kubelet[2501]: I0130 13:40:56.591059 2501 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:40:56.591126 kubelet[2501]: I0130 13:40:56.591074 2501 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:40:56.591534 kubelet[2501]: I0130 13:40:56.591251 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:40:56.592342 kubelet[2501]: E0130 13:40:56.592325 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:40:56.658910 kubelet[2501]: I0130 13:40:56.658879 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.659190 kubelet[2501]: I0130 13:40:56.659070 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:56.659190 kubelet[2501]: I0130 13:40:56.659104 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:56.664661 kubelet[2501]: E0130 13:40:56.664598 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:56.664894 kubelet[2501]: E0130 13:40:56.664870 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:56.665101 kubelet[2501]: E0130 13:40:56.665083 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.698578 kubelet[2501]: I0130 13:40:56.698544 2501 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:40:56.704329 kubelet[2501]: I0130 13:40:56.704275 2501 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:40:56.704447 kubelet[2501]: I0130 13:40:56.704342 2501 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:40:56.752902 kubelet[2501]: I0130 13:40:56.752859 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:56.753005 kubelet[2501]: I0130 13:40:56.752912 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:56.753005 kubelet[2501]: I0130 13:40:56.752935 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.753005 kubelet[2501]: I0130 13:40:56.752954 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.753005 kubelet[2501]: I0130 13:40:56.752973 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:56.753005 kubelet[2501]: I0130 13:40:56.752989 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6181d5a98d50fc8b2b2cdf9bdbbb7871-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6181d5a98d50fc8b2b2cdf9bdbbb7871\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:56.753118 kubelet[2501]: I0130 13:40:56.753005 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.753118 kubelet[2501]: I0130 13:40:56.753022 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.753118 kubelet[2501]: I0130 13:40:56.753048 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:40:56.965366 kubelet[2501]: E0130 13:40:56.965251 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:56.965366 kubelet[2501]: E0130 13:40:56.965251 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:56.965501 kubelet[2501]: E0130 13:40:56.965426 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:57.534300 kubelet[2501]: I0130 13:40:57.533651 2501 apiserver.go:52] "Watching apiserver" Jan 30 13:40:57.550777 kubelet[2501]: I0130 13:40:57.550716 2501 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:40:57.572042 kubelet[2501]: E0130 13:40:57.572000 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:57.572763 kubelet[2501]: I0130 13:40:57.572739 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:57.572962 kubelet[2501]: I0130 13:40:57.572934 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:57.581617 kubelet[2501]: E0130 13:40:57.581581 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:40:57.581929 kubelet[2501]: E0130 13:40:57.581733 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:57.582169 kubelet[2501]: E0130 13:40:57.582023 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:40:57.582169 kubelet[2501]: E0130 13:40:57.582109 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:57.594045 kubelet[2501]: I0130 13:40:57.593921 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5939059159999998 podStartE2EDuration="2.593905916s" podCreationTimestamp="2025-01-30 13:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:40:57.593698489 +0000 UTC m=+1.122870781" watchObservedRunningTime="2025-01-30 13:40:57.593905916 +0000 UTC m=+1.123078208" Jan 30 13:40:57.602146 kubelet[2501]: I0130 13:40:57.602084 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6020501940000003 podStartE2EDuration="2.602050194s" podCreationTimestamp="2025-01-30 13:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:40:57.601821086 +0000 UTC m=+1.130993378" watchObservedRunningTime="2025-01-30 13:40:57.602050194 +0000 UTC m=+1.131222486" Jan 30 13:40:57.611519 kubelet[2501]: I0130 13:40:57.609017 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.608996285 podStartE2EDuration="2.608996285s" podCreationTimestamp="2025-01-30 13:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:40:57.608890463 +0000 UTC m=+1.138062755" watchObservedRunningTime="2025-01-30 13:40:57.608996285 +0000 UTC m=+1.138168578" Jan 30 13:40:58.572678 kubelet[2501]: E0130 13:40:58.572633 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:58.573772 kubelet[2501]: E0130 13:40:58.573729 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:40:59.574113 kubelet[2501]: E0130 13:40:59.574070 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:01.097160 sudo[1641]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:01.099322 sshd[1638]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:01.103780 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:34360.service: Deactivated successfully. Jan 30 13:41:01.106139 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:41:01.106412 systemd[1]: session-7.scope: Consumed 4.203s CPU time, 156.5M memory peak, 0B memory swap peak. Jan 30 13:41:01.106885 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. 
Jan 30 13:41:01.107775 systemd-logind[1449]: Removed session 7. Jan 30 13:41:02.373057 kubelet[2501]: E0130 13:41:02.373020 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:02.577345 kubelet[2501]: E0130 13:41:02.577306 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:02.856301 kubelet[2501]: I0130 13:41:02.856182 2501 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:41:02.858074 containerd[1462]: time="2025-01-30T13:41:02.858037479Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:41:02.858425 kubelet[2501]: I0130 13:41:02.858359 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:41:03.507279 systemd[1]: Created slice kubepods-besteffort-pod20281408_62ee_4bae_b418_db74a070d3aa.slice - libcontainer container kubepods-besteffort-pod20281408_62ee_4bae_b418_db74a070d3aa.slice. Jan 30 13:41:03.600267 kubelet[2501]: I0130 13:41:03.600214 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20281408-62ee-4bae-b418-db74a070d3aa-lib-modules\") pod \"kube-proxy-npdzz\" (UID: \"20281408-62ee-4bae-b418-db74a070d3aa\") " pod="kube-system/kube-proxy-npdzz" Jan 30 13:41:03.600267 kubelet[2501]: I0130 13:41:03.600255 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2xc\" (UniqueName: \"kubernetes.io/projected/20281408-62ee-4bae-b418-db74a070d3aa-kube-api-access-9z2xc\") pod \"kube-proxy-npdzz\" (UID: \"20281408-62ee-4bae-b418-db74a070d3aa\") " pod="kube-system/kube-proxy-npdzz" Jan 30 13:41:03.600711 kubelet[2501]: I0130 13:41:03.600288 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20281408-62ee-4bae-b418-db74a070d3aa-kube-proxy\") pod \"kube-proxy-npdzz\" (UID: \"20281408-62ee-4bae-b418-db74a070d3aa\") " pod="kube-system/kube-proxy-npdzz" Jan 30 13:41:03.600711 kubelet[2501]: I0130 13:41:03.600303 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20281408-62ee-4bae-b418-db74a070d3aa-xtables-lock\") pod \"kube-proxy-npdzz\" (UID: \"20281408-62ee-4bae-b418-db74a070d3aa\") " pod="kube-system/kube-proxy-npdzz" Jan 30 13:41:03.822457 kubelet[2501]: E0130 13:41:03.822321 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:03.823036 containerd[1462]: time="2025-01-30T13:41:03.823001219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npdzz,Uid:20281408-62ee-4bae-b418-db74a070d3aa,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:03.847564 containerd[1462]: time="2025-01-30T13:41:03.847347282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:03.847564 containerd[1462]: time="2025-01-30T13:41:03.847446762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:03.849546 containerd[1462]: time="2025-01-30T13:41:03.847531854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:03.849546 containerd[1462]: time="2025-01-30T13:41:03.847626263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:03.862956 systemd[1]: run-containerd-runc-k8s.io-b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e-runc.humoxu.mount: Deactivated successfully. Jan 30 13:41:03.870815 systemd[1]: Started cri-containerd-b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e.scope - libcontainer container b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e. Jan 30 13:41:03.877108 systemd[1]: Created slice kubepods-besteffort-pod225726de_c4bc_456d_964e_a7290ef42348.slice - libcontainer container kubepods-besteffort-pod225726de_c4bc_456d_964e_a7290ef42348.slice. Jan 30 13:41:03.895855 containerd[1462]: time="2025-01-30T13:41:03.895778352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npdzz,Uid:20281408-62ee-4bae-b418-db74a070d3aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e\"" Jan 30 13:41:03.896522 kubelet[2501]: E0130 13:41:03.896474 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:03.898803 containerd[1462]: time="2025-01-30T13:41:03.898766659Z" level=info msg="CreateContainer within sandbox \"b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:41:03.901226 kubelet[2501]: I0130 13:41:03.901195 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/225726de-c4bc-456d-964e-a7290ef42348-var-lib-calico\") pod \"tigera-operator-7d68577dc5-8nbvc\" (UID: \"225726de-c4bc-456d-964e-a7290ef42348\") " pod="tigera-operator/tigera-operator-7d68577dc5-8nbvc" Jan 30 13:41:03.901226 kubelet[2501]: I0130 13:41:03.901223 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sm5g\" (UniqueName: \"kubernetes.io/projected/225726de-c4bc-456d-964e-a7290ef42348-kube-api-access-5sm5g\") pod \"tigera-operator-7d68577dc5-8nbvc\" (UID: \"225726de-c4bc-456d-964e-a7290ef42348\") " pod="tigera-operator/tigera-operator-7d68577dc5-8nbvc" Jan 30 13:41:03.914543 containerd[1462]: time="2025-01-30T13:41:03.914491174Z" level=info msg="CreateContainer within sandbox \"b54b41996613fe1884ec95da95ba4a38bdb57589a58c39fa1d4128daf709138e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bb3dd93376c757eee90ec82c3132d3472cfe8246c4c515d63ec667d88f58f71\"" Jan 30 13:41:03.915031 containerd[1462]: time="2025-01-30T13:41:03.914990274Z" level=info msg="StartContainer for \"2bb3dd93376c757eee90ec82c3132d3472cfe8246c4c515d63ec667d88f58f71\"" Jan 30 13:41:03.943644 systemd[1]: Started 
cri-containerd-2bb3dd93376c757eee90ec82c3132d3472cfe8246c4c515d63ec667d88f58f71.scope - libcontainer container 2bb3dd93376c757eee90ec82c3132d3472cfe8246c4c515d63ec667d88f58f71. Jan 30 13:41:03.970984 containerd[1462]: time="2025-01-30T13:41:03.970933860Z" level=info msg="StartContainer for \"2bb3dd93376c757eee90ec82c3132d3472cfe8246c4c515d63ec667d88f58f71\" returns successfully" Jan 30 13:41:04.180860 containerd[1462]: time="2025-01-30T13:41:04.180740590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-8nbvc,Uid:225726de-c4bc-456d-964e-a7290ef42348,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:41:04.206325 containerd[1462]: time="2025-01-30T13:41:04.206217862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:04.206953 containerd[1462]: time="2025-01-30T13:41:04.206818685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:04.206953 containerd[1462]: time="2025-01-30T13:41:04.206849402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:04.207217 containerd[1462]: time="2025-01-30T13:41:04.206922852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:04.225657 systemd[1]: Started cri-containerd-f35f3bc843247c9dbe73e2a8300d899561c87be6c1f84ee0f7ffefc2962b8225.scope - libcontainer container f35f3bc843247c9dbe73e2a8300d899561c87be6c1f84ee0f7ffefc2962b8225. Jan 30 13:41:04.261304 containerd[1462]: time="2025-01-30T13:41:04.261257962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-8nbvc,Uid:225726de-c4bc-456d-964e-a7290ef42348,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f35f3bc843247c9dbe73e2a8300d899561c87be6c1f84ee0f7ffefc2962b8225\"" Jan 30 13:41:04.262933 containerd[1462]: time="2025-01-30T13:41:04.262901527Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:41:04.581059 kubelet[2501]: E0130 13:41:04.580991 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:04.621260 kubelet[2501]: I0130 13:41:04.621198 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-npdzz" podStartSLOduration=1.621178647 podStartE2EDuration="1.621178647s" podCreationTimestamp="2025-01-30 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:04.620984057 +0000 UTC m=+8.150156349" watchObservedRunningTime="2025-01-30 13:41:04.621178647 +0000 UTC m=+8.150350939" Jan 30 13:41:06.410355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194558828.mount: Deactivated successfully. 
Jan 30 13:41:07.107922 kubelet[2501]: E0130 13:41:07.107601 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:07.116542 containerd[1462]: time="2025-01-30T13:41:07.116462883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:07.117522 containerd[1462]: time="2025-01-30T13:41:07.117307415Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:41:07.118644 containerd[1462]: time="2025-01-30T13:41:07.118612410Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:07.120861 containerd[1462]: time="2025-01-30T13:41:07.120834856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:07.121546 containerd[1462]: time="2025-01-30T13:41:07.121476722Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.858534668s" Jan 30 13:41:07.121633 containerd[1462]: time="2025-01-30T13:41:07.121542969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:41:07.123610 containerd[1462]: time="2025-01-30T13:41:07.123576345Z" level=info msg="CreateContainer within sandbox \"f35f3bc843247c9dbe73e2a8300d899561c87be6c1f84ee0f7ffefc2962b8225\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:41:07.136406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409910897.mount: Deactivated successfully. Jan 30 13:41:07.136632 containerd[1462]: time="2025-01-30T13:41:07.136556856Z" level=info msg="CreateContainer within sandbox \"f35f3bc843247c9dbe73e2a8300d899561c87be6c1f84ee0f7ffefc2962b8225\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"163edaa4487a93e734b757d2bba753f9f0eb22ef4b502207c26ebede537ee9ac\"" Jan 30 13:41:07.137313 containerd[1462]: time="2025-01-30T13:41:07.137250682Z" level=info msg="StartContainer for \"163edaa4487a93e734b757d2bba753f9f0eb22ef4b502207c26ebede537ee9ac\"" Jan 30 13:41:07.165632 systemd[1]: Started cri-containerd-163edaa4487a93e734b757d2bba753f9f0eb22ef4b502207c26ebede537ee9ac.scope - libcontainer container 163edaa4487a93e734b757d2bba753f9f0eb22ef4b502207c26ebede537ee9ac. 
Jan 30 13:41:07.189552 containerd[1462]: time="2025-01-30T13:41:07.189495284Z" level=info msg="StartContainer for \"163edaa4487a93e734b757d2bba753f9f0eb22ef4b502207c26ebede537ee9ac\" returns successfully" Jan 30 13:41:07.587647 kubelet[2501]: E0130 13:41:07.587610 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:08.224007 kubelet[2501]: E0130 13:41:08.223782 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:08.232186 kubelet[2501]: I0130 13:41:08.232017 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-8nbvc" podStartSLOduration=2.37222094 podStartE2EDuration="5.231998629s" podCreationTimestamp="2025-01-30 13:41:03 +0000 UTC" firstStartedPulling="2025-01-30 13:41:04.262459416 +0000 UTC m=+7.791631708" lastFinishedPulling="2025-01-30 13:41:07.122237105 +0000 UTC m=+10.651409397" observedRunningTime="2025-01-30 13:41:07.596216988 +0000 UTC m=+11.125389280" watchObservedRunningTime="2025-01-30 13:41:08.231998629 +0000 UTC m=+11.761170921" Jan 30 13:41:08.589071 kubelet[2501]: E0130 13:41:08.588957 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:08.849711 update_engine[1453]: I20250130 13:41:08.849562 1453 update_attempter.cc:509] Updating boot flags... Jan 30 13:41:08.875585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2892) Jan 30 13:41:08.914761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2893) Jan 30 13:41:08.939523 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2893) Jan 30 13:41:09.929657 systemd[1]: Created slice kubepods-besteffort-poda907322e_446f_40b3_98c7_0d12c9761067.slice - libcontainer container kubepods-besteffort-poda907322e_446f_40b3_98c7_0d12c9761067.slice. 
Jan 30 13:41:09.939606 kubelet[2501]: I0130 13:41:09.937977 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a907322e-446f-40b3-98c7-0d12c9761067-tigera-ca-bundle\") pod \"calico-typha-666bcbdb58-th8hn\" (UID: \"a907322e-446f-40b3-98c7-0d12c9761067\") " pod="calico-system/calico-typha-666bcbdb58-th8hn" Jan 30 13:41:09.939606 kubelet[2501]: I0130 13:41:09.938020 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a907322e-446f-40b3-98c7-0d12c9761067-typha-certs\") pod \"calico-typha-666bcbdb58-th8hn\" (UID: \"a907322e-446f-40b3-98c7-0d12c9761067\") " pod="calico-system/calico-typha-666bcbdb58-th8hn" Jan 30 13:41:09.939606 kubelet[2501]: I0130 13:41:09.938039 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkkq\" (UniqueName: \"kubernetes.io/projected/a907322e-446f-40b3-98c7-0d12c9761067-kube-api-access-4pkkq\") pod \"calico-typha-666bcbdb58-th8hn\" (UID: \"a907322e-446f-40b3-98c7-0d12c9761067\") " pod="calico-system/calico-typha-666bcbdb58-th8hn" Jan 30 13:41:10.023558 systemd[1]: Created slice kubepods-besteffort-podf5876bdd_eaaa_4c72_84ca_a8b874734d0f.slice - libcontainer container kubepods-besteffort-podf5876bdd_eaaa_4c72_84ca_a8b874734d0f.slice. Jan 30 13:41:10.038415 kubelet[2501]: I0130 13:41:10.038359 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-policysync\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.038415 kubelet[2501]: I0130 13:41:10.038421 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-tigera-ca-bundle\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.038638 kubelet[2501]: I0130 13:41:10.038443 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-cni-bin-dir\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.038638 kubelet[2501]: I0130 13:41:10.038462 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-cni-net-dir\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.038638 kubelet[2501]: I0130 13:41:10.038488 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-lib-modules\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.041995 kubelet[2501]: I0130 13:41:10.041937 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-xtables-lock\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042104 kubelet[2501]: I0130 13:41:10.042002 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-var-lib-calico\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042104 kubelet[2501]: I0130 13:41:10.042026 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-flexvol-driver-host\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042104 kubelet[2501]: I0130 13:41:10.042055 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-cni-log-dir\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042202 kubelet[2501]: I0130 13:41:10.042107 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-node-certs\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042202 kubelet[2501]: I0130 13:41:10.042130 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-var-run-calico\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.042202 kubelet[2501]: I0130 13:41:10.042149 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs75l\" (UniqueName: \"kubernetes.io/projected/f5876bdd-eaaa-4c72-84ca-a8b874734d0f-kube-api-access-bs75l\") pod \"calico-node-s6k4c\" (UID: \"f5876bdd-eaaa-4c72-84ca-a8b874734d0f\") " pod="calico-system/calico-node-s6k4c" Jan 30 13:41:10.125083 kubelet[2501]: E0130 13:41:10.125034 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:10.142473 kubelet[2501]: I0130 13:41:10.142421 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23f7c933-d0e1-4d42-a085-53875d9b091a-registration-dir\") pod \"csi-node-driver-9g6zr\" (UID: \"23f7c933-d0e1-4d42-a085-53875d9b091a\") " pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:10.142639 kubelet[2501]: I0130 13:41:10.142519 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/23f7c933-d0e1-4d42-a085-53875d9b091a-socket-dir\") pod \"csi-node-driver-9g6zr\" (UID: \"23f7c933-d0e1-4d42-a085-53875d9b091a\") " pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:10.142639 kubelet[2501]: I0130 13:41:10.142552 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23f7c933-d0e1-4d42-a085-53875d9b091a-varrun\") pod \"csi-node-driver-9g6zr\" (UID: \"23f7c933-d0e1-4d42-a085-53875d9b091a\") " pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:10.142639 kubelet[2501]: I0130 13:41:10.142591 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f7c933-d0e1-4d42-a085-53875d9b091a-kubelet-dir\") pod \"csi-node-driver-9g6zr\" (UID: \"23f7c933-d0e1-4d42-a085-53875d9b091a\") " pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:10.142639 kubelet[2501]: I0130 13:41:10.142605 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26qz\" (UniqueName: \"kubernetes.io/projected/23f7c933-d0e1-4d42-a085-53875d9b091a-kube-api-access-s26qz\") pod \"csi-node-driver-9g6zr\" (UID: \"23f7c933-d0e1-4d42-a085-53875d9b091a\") " pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:10.144163 kubelet[2501]: E0130 13:41:10.144129 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.144163 kubelet[2501]: W0130 13:41:10.144152 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.144163 kubelet[2501]: E0130 13:41:10.144174 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.144411 kubelet[2501]: E0130 13:41:10.144396 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.144411 kubelet[2501]: W0130 13:41:10.144407 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.144498 kubelet[2501]: E0130 13:41:10.144421 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.144651 kubelet[2501]: E0130 13:41:10.144630 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.144651 kubelet[2501]: W0130 13:41:10.144641 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.144701 kubelet[2501]: E0130 13:41:10.144654 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.144982 kubelet[2501]: E0130 13:41:10.144949 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.144982 kubelet[2501]: W0130 13:41:10.144975 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.145041 kubelet[2501]: E0130 13:41:10.145004 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.145254 kubelet[2501]: E0130 13:41:10.145230 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.145254 kubelet[2501]: W0130 13:41:10.145242 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.145336 kubelet[2501]: E0130 13:41:10.145255 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.145899 kubelet[2501]: E0130 13:41:10.145869 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.145899 kubelet[2501]: W0130 13:41:10.145886 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.145899 kubelet[2501]: E0130 13:41:10.145898 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.149800 kubelet[2501]: E0130 13:41:10.149669 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.149800 kubelet[2501]: W0130 13:41:10.149685 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.149800 kubelet[2501]: E0130 13:41:10.149703 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.149940 kubelet[2501]: E0130 13:41:10.149916 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.149940 kubelet[2501]: W0130 13:41:10.149935 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.150018 kubelet[2501]: E0130 13:41:10.149948 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.156205 kubelet[2501]: E0130 13:41:10.156161 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.156205 kubelet[2501]: W0130 13:41:10.156203 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.156305 kubelet[2501]: E0130 13:41:10.156226 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.233404 kubelet[2501]: E0130 13:41:10.233358 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:10.233955 containerd[1462]: time="2025-01-30T13:41:10.233917068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-666bcbdb58-th8hn,Uid:a907322e-446f-40b3-98c7-0d12c9761067,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:10.244028 kubelet[2501]: E0130 13:41:10.243997 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.244028 kubelet[2501]: W0130 13:41:10.244015 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.244028 kubelet[2501]: E0130 13:41:10.244032 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.244379 kubelet[2501]: E0130 13:41:10.244348 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.244379 kubelet[2501]: W0130 13:41:10.244360 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.244379 kubelet[2501]: E0130 13:41:10.244373 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.244636 kubelet[2501]: E0130 13:41:10.244621 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.244636 kubelet[2501]: W0130 13:41:10.244632 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.244694 kubelet[2501]: E0130 13:41:10.244652 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.244924 kubelet[2501]: E0130 13:41:10.244891 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.244924 kubelet[2501]: W0130 13:41:10.244915 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.245068 kubelet[2501]: E0130 13:41:10.244948 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.245222 kubelet[2501]: E0130 13:41:10.245208 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.245222 kubelet[2501]: W0130 13:41:10.245218 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.245276 kubelet[2501]: E0130 13:41:10.245232 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.245457 kubelet[2501]: E0130 13:41:10.245440 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.245483 kubelet[2501]: W0130 13:41:10.245454 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.245483 kubelet[2501]: E0130 13:41:10.245472 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.245702 kubelet[2501]: E0130 13:41:10.245688 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.245702 kubelet[2501]: W0130 13:41:10.245701 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.245750 kubelet[2501]: E0130 13:41:10.245730 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.245900 kubelet[2501]: E0130 13:41:10.245886 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.245900 kubelet[2501]: W0130 13:41:10.245896 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.245947 kubelet[2501]: E0130 13:41:10.245923 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.246097 kubelet[2501]: E0130 13:41:10.246081 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.246097 kubelet[2501]: W0130 13:41:10.246093 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.246149 kubelet[2501]: E0130 13:41:10.246120 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.246296 kubelet[2501]: E0130 13:41:10.246274 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.246296 kubelet[2501]: W0130 13:41:10.246293 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.246348 kubelet[2501]: E0130 13:41:10.246317 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.246501 kubelet[2501]: E0130 13:41:10.246487 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.246542 kubelet[2501]: W0130 13:41:10.246498 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.246566 kubelet[2501]: E0130 13:41:10.246541 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.246713 kubelet[2501]: E0130 13:41:10.246699 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.246713 kubelet[2501]: W0130 13:41:10.246710 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.246760 kubelet[2501]: E0130 13:41:10.246723 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.246947 kubelet[2501]: E0130 13:41:10.246932 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.246947 kubelet[2501]: W0130 13:41:10.246944 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.247002 kubelet[2501]: E0130 13:41:10.246960 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.247180 kubelet[2501]: E0130 13:41:10.247166 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.247180 kubelet[2501]: W0130 13:41:10.247176 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.247230 kubelet[2501]: E0130 13:41:10.247191 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.247409 kubelet[2501]: E0130 13:41:10.247393 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.247409 kubelet[2501]: W0130 13:41:10.247406 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.247545 kubelet[2501]: E0130 13:41:10.247420 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.247659 kubelet[2501]: E0130 13:41:10.247640 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.247659 kubelet[2501]: W0130 13:41:10.247650 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.247713 kubelet[2501]: E0130 13:41:10.247677 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.247855 kubelet[2501]: E0130 13:41:10.247837 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.247855 kubelet[2501]: W0130 13:41:10.247847 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.247899 kubelet[2501]: E0130 13:41:10.247873 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.248063 kubelet[2501]: E0130 13:41:10.248048 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.248063 kubelet[2501]: W0130 13:41:10.248058 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.248235 kubelet[2501]: E0130 13:41:10.248091 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.248319 kubelet[2501]: E0130 13:41:10.248300 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.248319 kubelet[2501]: W0130 13:41:10.248311 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.248365 kubelet[2501]: E0130 13:41:10.248335 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.248560 kubelet[2501]: E0130 13:41:10.248541 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.248560 kubelet[2501]: W0130 13:41:10.248553 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.248613 kubelet[2501]: E0130 13:41:10.248575 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.248786 kubelet[2501]: E0130 13:41:10.248769 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.248786 kubelet[2501]: W0130 13:41:10.248781 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.248863 kubelet[2501]: E0130 13:41:10.248794 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.249085 kubelet[2501]: E0130 13:41:10.249059 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.249116 kubelet[2501]: W0130 13:41:10.249084 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.249150 kubelet[2501]: E0130 13:41:10.249115 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.249409 kubelet[2501]: E0130 13:41:10.249394 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.249409 kubelet[2501]: W0130 13:41:10.249407 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.249460 kubelet[2501]: E0130 13:41:10.249417 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:10.249754 kubelet[2501]: E0130 13:41:10.249738 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.249754 kubelet[2501]: W0130 13:41:10.249753 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.249815 kubelet[2501]: E0130 13:41:10.249766 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.307752 kubelet[2501]: E0130 13:41:10.307722 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.307752 kubelet[2501]: W0130 13:41:10.307746 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.307828 kubelet[2501]: E0130 13:41:10.307764 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.316053 kubelet[2501]: E0130 13:41:10.316028 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:10.316053 kubelet[2501]: W0130 13:41:10.316044 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:10.316053 kubelet[2501]: E0130 13:41:10.316059 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:41:10.328658 kubelet[2501]: E0130 13:41:10.328621 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:10.329236 containerd[1462]: time="2025-01-30T13:41:10.329199939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6k4c,Uid:f5876bdd-eaaa-4c72-84ca-a8b874734d0f,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:10.334104 containerd[1462]: time="2025-01-30T13:41:10.333926906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:10.334104 containerd[1462]: time="2025-01-30T13:41:10.333992791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:10.334104 containerd[1462]: time="2025-01-30T13:41:10.334020934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:10.334422 containerd[1462]: time="2025-01-30T13:41:10.334177280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:10.356448 containerd[1462]: time="2025-01-30T13:41:10.356222890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:10.356448 containerd[1462]: time="2025-01-30T13:41:10.356295136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:10.356448 containerd[1462]: time="2025-01-30T13:41:10.356307961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:10.356448 containerd[1462]: time="2025-01-30T13:41:10.356390076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:10.356690 systemd[1]: Started cri-containerd-bd447e114ac2039e64ebc28ba66b21f1cdb6630f8698af143517dc68d6620db5.scope - libcontainer container bd447e114ac2039e64ebc28ba66b21f1cdb6630f8698af143517dc68d6620db5. Jan 30 13:41:10.374658 systemd[1]: Started cri-containerd-3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a.scope - libcontainer container 3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a. Jan 30 13:41:10.397532 containerd[1462]: time="2025-01-30T13:41:10.397435281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-666bcbdb58-th8hn,Uid:a907322e-446f-40b3-98c7-0d12c9761067,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd447e114ac2039e64ebc28ba66b21f1cdb6630f8698af143517dc68d6620db5\"" Jan 30 13:41:10.398330 kubelet[2501]: E0130 13:41:10.398259 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:10.400147 containerd[1462]: time="2025-01-30T13:41:10.400111346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:41:10.401589 containerd[1462]: time="2025-01-30T13:41:10.401563445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6k4c,Uid:f5876bdd-eaaa-4c72-84ca-a8b874734d0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\"" Jan 30 13:41:10.402847 kubelet[2501]: E0130 13:41:10.402760 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:11.558015 kubelet[2501]: E0130 13:41:11.557953 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:11.756755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421736063.mount: Deactivated successfully. 
Jan 30 13:41:13.221828 containerd[1462]: time="2025-01-30T13:41:13.221772818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:13.222477 containerd[1462]: time="2025-01-30T13:41:13.222440850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:41:13.223469 containerd[1462]: time="2025-01-30T13:41:13.223430721Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:13.225778 containerd[1462]: time="2025-01-30T13:41:13.225734154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:13.226434 containerd[1462]: time="2025-01-30T13:41:13.226411914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.826262617s" Jan 30 13:41:13.226468 containerd[1462]: time="2025-01-30T13:41:13.226438444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:41:13.228199 containerd[1462]: time="2025-01-30T13:41:13.227782194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:41:13.235746 containerd[1462]: time="2025-01-30T13:41:13.235664269Z" level=info msg="CreateContainer within sandbox \"bd447e114ac2039e64ebc28ba66b21f1cdb6630f8698af143517dc68d6620db5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:41:13.248628 containerd[1462]: time="2025-01-30T13:41:13.248586329Z" level=info msg="CreateContainer within sandbox \"bd447e114ac2039e64ebc28ba66b21f1cdb6630f8698af143517dc68d6620db5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5dbf0230c22a22e3d163e7b7a690731520f11f18272be31d288aa37c73b769d4\"" Jan 30 13:41:13.249227 containerd[1462]: time="2025-01-30T13:41:13.248986635Z" level=info msg="StartContainer for \"5dbf0230c22a22e3d163e7b7a690731520f11f18272be31d288aa37c73b769d4\"" Jan 30 13:41:13.275657 systemd[1]: Started cri-containerd-5dbf0230c22a22e3d163e7b7a690731520f11f18272be31d288aa37c73b769d4.scope - libcontainer container 5dbf0230c22a22e3d163e7b7a690731520f11f18272be31d288aa37c73b769d4. 
Jan 30 13:41:13.315397 containerd[1462]: time="2025-01-30T13:41:13.315353992Z" level=info msg="StartContainer for \"5dbf0230c22a22e3d163e7b7a690731520f11f18272be31d288aa37c73b769d4\" returns successfully" Jan 30 13:41:13.558628 kubelet[2501]: E0130 13:41:13.558487 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:13.600595 kubelet[2501]: E0130 13:41:13.600498 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:13.610032 kubelet[2501]: I0130 13:41:13.609970 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-666bcbdb58-th8hn" podStartSLOduration=1.7812503400000002 podStartE2EDuration="4.608990459s" podCreationTimestamp="2025-01-30 13:41:09 +0000 UTC" firstStartedPulling="2025-01-30 13:41:10.399776142 +0000 UTC m=+13.928948444" lastFinishedPulling="2025-01-30 13:41:13.227516271 +0000 UTC m=+16.756688563" observedRunningTime="2025-01-30 13:41:13.60878458 +0000 UTC m=+17.137956872" watchObservedRunningTime="2025-01-30 13:41:13.608990459 +0000 UTC m=+17.138162751"
Jan 30 13:41:13.646323 kubelet[2501]: E0130 13:41:13.646279 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:13.646323 kubelet[2501]: W0130 13:41:13.646303 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:13.646475 kubelet[2501]: E0130 13:41:13.646340 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[roughly thirty further repetitions of the preceding three kubelet FlexVolume messages (driver-call.go:262, driver-call.go:149, plugins.go:695), with fresh timestamps running from 13:41:13.646 through 13:41:13.674, elided; the final occurrence follows]
Jan 30 13:41:13.674648 kubelet[2501]: E0130 13:41:13.674633 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:41:13.674648 kubelet[2501]: W0130 13:41:13.674644 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:41:13.674713 kubelet[2501]: E0130 13:41:13.674653 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:41:14.514734 containerd[1462]: time="2025-01-30T13:41:14.514687438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:14.515537 containerd[1462]: time="2025-01-30T13:41:14.515481438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:41:14.516742 containerd[1462]: time="2025-01-30T13:41:14.516704758Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:14.518984 containerd[1462]: time="2025-01-30T13:41:14.518892711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:14.519680 containerd[1462]: time="2025-01-30T13:41:14.519650982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.291696563s" Jan 30 13:41:14.519749 containerd[1462]: time="2025-01-30T13:41:14.519679677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:41:14.521448 containerd[1462]: time="2025-01-30T13:41:14.521412240Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:41:14.539385 containerd[1462]: time="2025-01-30T13:41:14.538360060Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634\"" Jan 30 13:41:14.539385 containerd[1462]: time="2025-01-30T13:41:14.539115348Z" level=info msg="StartContainer for \"5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634\"" Jan 30 13:41:14.574701 systemd[1]: Started cri-containerd-5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634.scope - libcontainer container 5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634. Jan 30 13:41:14.607555 kubelet[2501]: I0130 13:41:14.607084 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:14.607555 kubelet[2501]: E0130 13:41:14.607460 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:14.610769 containerd[1462]: time="2025-01-30T13:41:14.610640185Z" level=info msg="StartContainer for \"5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634\" returns successfully" Jan 30 13:41:14.630262 systemd[1]: cri-containerd-5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634.scope: Deactivated successfully. 
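The burst of "unexpected end of JSON input" messages condensed above is kubelet probing the nodeagent~uds FlexVolume directory before Calico's flexvol-driver init container (the pod2daemon-flexvol image whose pull and start are logged just above) has installed the uds binary, so each driver call yields empty stdout that fails JSON decoding. A FlexVolume driver is simply an executable that answers commands such as init with a JSON status object on stdout; a minimal sketch of that contract, assuming the documented status/message/capabilities shape rather than Calico's actual uds implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape kubelet's FlexVolume
    // driver-call machinery expects on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus, code int) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
        os.Exit(code)
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Printing nothing here is exactly what produces
            // "unexpected end of JSON input" in the kubelet log.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
        }
        reply(driverStatus{Status: "Not supported"}, 1)
    }

Once the init container copies the real driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the probe errors stop, which matches their disappearance from the log after 13:41:14.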
Jan 30 13:41:14.848167 containerd[1462]: time="2025-01-30T13:41:14.845910098Z" level=info msg="shim disconnected" id=5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634 namespace=k8s.io Jan 30 13:41:14.848167 containerd[1462]: time="2025-01-30T13:41:14.848073383Z" level=warning msg="cleaning up after shim disconnected" id=5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634 namespace=k8s.io Jan 30 13:41:14.848167 containerd[1462]: time="2025-01-30T13:41:14.848084044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:41:15.233195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a245b47272f66be8a6e772bd483ef840613081ac146996a139f9f7047e24634-rootfs.mount: Deactivated successfully. Jan 30 13:41:15.558805 kubelet[2501]: E0130 13:41:15.558659 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:15.609287 kubelet[2501]: E0130 13:41:15.609130 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:15.609995 containerd[1462]: time="2025-01-30T13:41:15.609964102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:41:15.728595 kubelet[2501]: I0130 13:41:15.728542 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:15.728966 kubelet[2501]: E0130 13:41:15.728940 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:16.610728 kubelet[2501]: E0130 13:41:16.610690 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:17.557976 kubelet[2501]: E0130 13:41:17.557937 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:19.557816 kubelet[2501]: E0130 13:41:19.557745 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:21.436994 containerd[1462]: time="2025-01-30T13:41:21.436952822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:21.437845 containerd[1462]: time="2025-01-30T13:41:21.437788506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:41:21.439088 containerd[1462]: time="2025-01-30T13:41:21.439057185Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
13:41:21.441847 containerd[1462]: time="2025-01-30T13:41:21.441817345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:21.442708 containerd[1462]: time="2025-01-30T13:41:21.442653100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.8326476s" Jan 30 13:41:21.442747 containerd[1462]: time="2025-01-30T13:41:21.442708985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:41:21.444232 containerd[1462]: time="2025-01-30T13:41:21.444206146Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:41:21.459116 containerd[1462]: time="2025-01-30T13:41:21.459082259Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b\"" Jan 30 13:41:21.459433 containerd[1462]: time="2025-01-30T13:41:21.459401380Z" level=info msg="StartContainer for \"ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b\"" Jan 30 13:41:21.495750 systemd[1]: Started cri-containerd-ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b.scope - libcontainer container ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b. Jan 30 13:41:21.522752 containerd[1462]: time="2025-01-30T13:41:21.522714529Z" level=info msg="StartContainer for \"ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b\" returns successfully" Jan 30 13:41:21.558243 kubelet[2501]: E0130 13:41:21.558196 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:21.631229 kubelet[2501]: E0130 13:41:21.631194 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:22.486850 systemd[1]: cri-containerd-ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b.scope: Deactivated successfully. Jan 30 13:41:22.507162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b-rootfs.mount: Deactivated successfully. Jan 30 13:41:22.518590 kubelet[2501]: I0130 13:41:22.518552 2501 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:41:22.641066 systemd[1]: Created slice kubepods-burstable-pod5286e518_a601_45d8_b742_fd5b70c8b40f.slice - libcontainer container kubepods-burstable-pod5286e518_a601_45d8_b742_fd5b70c8b40f.slice. 
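The node flips Ready here ("Fast updating node status as it just became ready") because the install-cni container that just exited has written a network config for the CRI plugin to find; its absence is what produced every earlier "cni plugin not initialized" message. A deliberately simplified sketch of that gate, assuming the conventional /etc/cni/net.d confdir (the real check lives in containerd's CRI plugin, which also loads and validates the config):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists
    // in dir; until one does, the runtime reports NetworkReady=false.
    func cniConfigPresent(dir string) (string, bool) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return "", false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return filepath.Join(dir, e.Name()), true
            }
        }
        return "", false
    }

    func main() {
        if path, ok := cniConfigPresent("/etc/cni/net.d"); ok {
            fmt.Println("network config found:", path)
            return
        }
        fmt.Println("no CNI config yet: NetworkReady=false")
    }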
Jan 30 13:41:22.642476 kubelet[2501]: E0130 13:41:22.641175 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:22.647334 systemd[1]: Created slice kubepods-burstable-pod7cc98b6c_2623_4585_9bb5_117c79a9fe02.slice - libcontainer container kubepods-burstable-pod7cc98b6c_2623_4585_9bb5_117c79a9fe02.slice. Jan 30 13:41:22.652839 systemd[1]: Created slice kubepods-besteffort-pod4f7442c3_8bdd_40c7_a454_8cfac24075e7.slice - libcontainer container kubepods-besteffort-pod4f7442c3_8bdd_40c7_a454_8cfac24075e7.slice. Jan 30 13:41:22.656278 systemd[1]: Created slice kubepods-besteffort-podf3d49e37_12b3_413c_8b6b_5cfccd4b4b80.slice - libcontainer container kubepods-besteffort-podf3d49e37_12b3_413c_8b6b_5cfccd4b4b80.slice. Jan 30 13:41:22.660624 systemd[1]: Created slice kubepods-besteffort-pode56822bd_9fb7_4fe0_827c_0d6527cef94c.slice - libcontainer container kubepods-besteffort-pode56822bd_9fb7_4fe0_827c_0d6527cef94c.slice. Jan 30 13:41:22.725370 containerd[1462]: time="2025-01-30T13:41:22.725283946Z" level=info msg="shim disconnected" id=ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b namespace=k8s.io Jan 30 13:41:22.725370 containerd[1462]: time="2025-01-30T13:41:22.725348438Z" level=warning msg="cleaning up after shim disconnected" id=ac24f1aba98871a7b80fd71a9f532c5791b5eea0876d95c68e6efbd9f6183b8b namespace=k8s.io Jan 30 13:41:22.725370 containerd[1462]: time="2025-01-30T13:41:22.725357054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:41:22.735012 kubelet[2501]: I0130 13:41:22.734955 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5286e518-a601-45d8-b742-fd5b70c8b40f-config-volume\") pod \"coredns-668d6bf9bc-xgt7b\" (UID: \"5286e518-a601-45d8-b742-fd5b70c8b40f\") " pod="kube-system/coredns-668d6bf9bc-xgt7b" Jan 30 13:41:22.735128 kubelet[2501]: I0130 13:41:22.734993 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3d49e37-12b3-413c-8b6b-5cfccd4b4b80-calico-apiserver-certs\") pod \"calico-apiserver-7b5f976dbf-r7hdj\" (UID: \"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80\") " pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" Jan 30 13:41:22.735128 kubelet[2501]: I0130 13:41:22.735046 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cc98b6c-2623-4585-9bb5-117c79a9fe02-config-volume\") pod \"coredns-668d6bf9bc-6wdhf\" (UID: \"7cc98b6c-2623-4585-9bb5-117c79a9fe02\") " pod="kube-system/coredns-668d6bf9bc-6wdhf" Jan 30 13:41:22.735128 kubelet[2501]: I0130 13:41:22.735061 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j2f7\" (UniqueName: \"kubernetes.io/projected/7cc98b6c-2623-4585-9bb5-117c79a9fe02-kube-api-access-7j2f7\") pod \"coredns-668d6bf9bc-6wdhf\" (UID: \"7cc98b6c-2623-4585-9bb5-117c79a9fe02\") " pod="kube-system/coredns-668d6bf9bc-6wdhf" Jan 30 13:41:22.735128 kubelet[2501]: I0130 13:41:22.735099 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm42h\" (UniqueName: \"kubernetes.io/projected/e56822bd-9fb7-4fe0-827c-0d6527cef94c-kube-api-access-mm42h\") pod 
\"calico-apiserver-7b5f976dbf-5c8cv\" (UID: \"e56822bd-9fb7-4fe0-827c-0d6527cef94c\") " pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" Jan 30 13:41:22.735128 kubelet[2501]: I0130 13:41:22.735126 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz2k5\" (UniqueName: \"kubernetes.io/projected/4f7442c3-8bdd-40c7-a454-8cfac24075e7-kube-api-access-gz2k5\") pod \"calico-kube-controllers-58bbf48d84-qbktp\" (UID: \"4f7442c3-8bdd-40c7-a454-8cfac24075e7\") " pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" Jan 30 13:41:22.735339 kubelet[2501]: I0130 13:41:22.735187 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5swg\" (UniqueName: \"kubernetes.io/projected/f3d49e37-12b3-413c-8b6b-5cfccd4b4b80-kube-api-access-b5swg\") pod \"calico-apiserver-7b5f976dbf-r7hdj\" (UID: \"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80\") " pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" Jan 30 13:41:22.735339 kubelet[2501]: I0130 13:41:22.735206 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghzzc\" (UniqueName: \"kubernetes.io/projected/5286e518-a601-45d8-b742-fd5b70c8b40f-kube-api-access-ghzzc\") pod \"coredns-668d6bf9bc-xgt7b\" (UID: \"5286e518-a601-45d8-b742-fd5b70c8b40f\") " pod="kube-system/coredns-668d6bf9bc-xgt7b" Jan 30 13:41:22.735339 kubelet[2501]: I0130 13:41:22.735222 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e56822bd-9fb7-4fe0-827c-0d6527cef94c-calico-apiserver-certs\") pod \"calico-apiserver-7b5f976dbf-5c8cv\" (UID: \"e56822bd-9fb7-4fe0-827c-0d6527cef94c\") " pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" Jan 30 13:41:22.735339 kubelet[2501]: I0130 13:41:22.735237 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f7442c3-8bdd-40c7-a454-8cfac24075e7-tigera-ca-bundle\") pod \"calico-kube-controllers-58bbf48d84-qbktp\" (UID: \"4f7442c3-8bdd-40c7-a454-8cfac24075e7\") " pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" Jan 30 13:41:22.945405 kubelet[2501]: E0130 13:41:22.945271 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:22.946066 containerd[1462]: time="2025-01-30T13:41:22.946024284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgt7b,Uid:5286e518-a601-45d8-b742-fd5b70c8b40f,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:22.949709 kubelet[2501]: E0130 13:41:22.949597 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:22.950420 containerd[1462]: time="2025-01-30T13:41:22.950035508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6wdhf,Uid:7cc98b6c-2623-4585-9bb5-117c79a9fe02,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:22.955952 containerd[1462]: time="2025-01-30T13:41:22.955900013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bbf48d84-qbktp,Uid:4f7442c3-8bdd-40c7-a454-8cfac24075e7,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:22.960308 
containerd[1462]: time="2025-01-30T13:41:22.960241899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-r7hdj,Uid:f3d49e37-12b3-413c-8b6b-5cfccd4b4b80,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:41:22.962987 containerd[1462]: time="2025-01-30T13:41:22.962958907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-5c8cv,Uid:e56822bd-9fb7-4fe0-827c-0d6527cef94c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:41:23.063833 containerd[1462]: time="2025-01-30T13:41:23.063775657Z" level=error msg="Failed to destroy network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.064649 containerd[1462]: time="2025-01-30T13:41:23.064561987Z" level=error msg="encountered an error cleaning up failed sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.064649 containerd[1462]: time="2025-01-30T13:41:23.064607873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgt7b,Uid:5286e518-a601-45d8-b742-fd5b70c8b40f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.064967 kubelet[2501]: E0130 13:41:23.064927 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.065058 kubelet[2501]: E0130 13:41:23.065002 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xgt7b" Jan 30 13:41:23.065058 kubelet[2501]: E0130 13:41:23.065025 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xgt7b" Jan 30 13:41:23.065123 kubelet[2501]: E0130 13:41:23.065073 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xgt7b_kube-system(5286e518-a601-45d8-b742-fd5b70c8b40f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xgt7b_kube-system(5286e518-a601-45d8-b742-fd5b70c8b40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xgt7b" podUID="5286e518-a601-45d8-b742-fd5b70c8b40f" Jan 30 13:41:23.067741 containerd[1462]: time="2025-01-30T13:41:23.067684045Z" level=error msg="Failed to destroy network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.068600 containerd[1462]: time="2025-01-30T13:41:23.068355880Z" level=error msg="encountered an error cleaning up failed sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.068600 containerd[1462]: time="2025-01-30T13:41:23.068451940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bbf48d84-qbktp,Uid:4f7442c3-8bdd-40c7-a454-8cfac24075e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.068806 kubelet[2501]: E0130 13:41:23.068759 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.068843 kubelet[2501]: E0130 13:41:23.068827 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" Jan 30 13:41:23.068871 kubelet[2501]: E0130 13:41:23.068848 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" Jan 30 13:41:23.068920 kubelet[2501]: E0130 13:41:23.068891 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-58bbf48d84-qbktp_calico-system(4f7442c3-8bdd-40c7-a454-8cfac24075e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58bbf48d84-qbktp_calico-system(4f7442c3-8bdd-40c7-a454-8cfac24075e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" podUID="4f7442c3-8bdd-40c7-a454-8cfac24075e7" Jan 30 13:41:23.080749 containerd[1462]: time="2025-01-30T13:41:23.080706683Z" level=error msg="Failed to destroy network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.081203 containerd[1462]: time="2025-01-30T13:41:23.081179513Z" level=error msg="encountered an error cleaning up failed sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.081339 containerd[1462]: time="2025-01-30T13:41:23.081276586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-5c8cv,Uid:e56822bd-9fb7-4fe0-827c-0d6527cef94c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082377 containerd[1462]: time="2025-01-30T13:41:23.080759293Z" level=error msg="Failed to destroy network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082377 containerd[1462]: time="2025-01-30T13:41:23.081816533Z" level=error msg="encountered an error cleaning up failed sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082377 containerd[1462]: time="2025-01-30T13:41:23.081879441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-r7hdj,Uid:f3d49e37-12b3-413c-8b6b-5cfccd4b4b80,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082476 kubelet[2501]: E0130 13:41:23.081523 2501 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082476 kubelet[2501]: E0130 13:41:23.081595 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" Jan 30 13:41:23.082476 kubelet[2501]: E0130 13:41:23.081624 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" Jan 30 13:41:23.082589 kubelet[2501]: E0130 13:41:23.081684 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5f976dbf-5c8cv_calico-apiserver(e56822bd-9fb7-4fe0-827c-0d6527cef94c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5f976dbf-5c8cv_calico-apiserver(e56822bd-9fb7-4fe0-827c-0d6527cef94c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" podUID="e56822bd-9fb7-4fe0-827c-0d6527cef94c" Jan 30 13:41:23.082589 kubelet[2501]: E0130 13:41:23.082037 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.082589 kubelet[2501]: E0130 13:41:23.082079 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" Jan 30 13:41:23.082677 kubelet[2501]: E0130 13:41:23.082100 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" Jan 30 13:41:23.082677 kubelet[2501]: E0130 13:41:23.082135 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5f976dbf-r7hdj_calico-apiserver(f3d49e37-12b3-413c-8b6b-5cfccd4b4b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5f976dbf-r7hdj_calico-apiserver(f3d49e37-12b3-413c-8b6b-5cfccd4b4b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" podUID="f3d49e37-12b3-413c-8b6b-5cfccd4b4b80" Jan 30 13:41:23.083079 containerd[1462]: time="2025-01-30T13:41:23.083044053Z" level=error msg="Failed to destroy network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.083394 containerd[1462]: time="2025-01-30T13:41:23.083371230Z" level=error msg="encountered an error cleaning up failed sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.083449 containerd[1462]: time="2025-01-30T13:41:23.083407418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6wdhf,Uid:7cc98b6c-2623-4585-9bb5-117c79a9fe02,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.083788 kubelet[2501]: E0130 13:41:23.083755 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.083862 kubelet[2501]: E0130 13:41:23.083800 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6wdhf" Jan 30 13:41:23.083862 kubelet[2501]: E0130 13:41:23.083823 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6wdhf" Jan 30 13:41:23.083930 kubelet[2501]: E0130 13:41:23.083871 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6wdhf_kube-system(7cc98b6c-2623-4585-9bb5-117c79a9fe02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6wdhf_kube-system(7cc98b6c-2623-4585-9bb5-117c79a9fe02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6wdhf" podUID="7cc98b6c-2623-4585-9bb5-117c79a9fe02" Jan 30 13:41:23.508642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8-shm.mount: Deactivated successfully. Jan 30 13:41:23.566104 systemd[1]: Created slice kubepods-besteffort-pod23f7c933_d0e1_4d42_a085_53875d9b091a.slice - libcontainer container kubepods-besteffort-pod23f7c933_d0e1_4d42_a085_53875d9b091a.slice. Jan 30 13:41:23.569225 containerd[1462]: time="2025-01-30T13:41:23.569177290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g6zr,Uid:23f7c933-d0e1-4d42-a085-53875d9b091a,Namespace:calico-system,Attempt:0,}" Jan 30 13:41:23.629327 containerd[1462]: time="2025-01-30T13:41:23.629277232Z" level=error msg="Failed to destroy network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.629683 containerd[1462]: time="2025-01-30T13:41:23.629657999Z" level=error msg="encountered an error cleaning up failed sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.629729 containerd[1462]: time="2025-01-30T13:41:23.629709867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g6zr,Uid:23f7c933-d0e1-4d42-a085-53875d9b091a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.629941 kubelet[2501]: E0130 13:41:23.629905 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.629999 kubelet[2501]: E0130 13:41:23.629962 2501 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:23.629999 kubelet[2501]: E0130 13:41:23.629984 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9g6zr" Jan 30 13:41:23.630071 kubelet[2501]: E0130 13:41:23.630026 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9g6zr_calico-system(23f7c933-d0e1-4d42-a085-53875d9b091a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9g6zr_calico-system(23f7c933-d0e1-4d42-a085-53875d9b091a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:23.632391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf-shm.mount: Deactivated successfully. 
Jan 30 13:41:23.643355 kubelet[2501]: I0130 13:41:23.643329 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:23.644351 containerd[1462]: time="2025-01-30T13:41:23.643962580Z" level=info msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" Jan 30 13:41:23.644351 containerd[1462]: time="2025-01-30T13:41:23.644113735Z" level=info msg="Ensure that sandbox 57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7 in task-service has been cleanup successfully" Jan 30 13:41:23.644630 kubelet[2501]: I0130 13:41:23.644609 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:23.645433 containerd[1462]: time="2025-01-30T13:41:23.645060137Z" level=info msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" Jan 30 13:41:23.645433 containerd[1462]: time="2025-01-30T13:41:23.645224336Z" level=info msg="Ensure that sandbox 2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed in task-service has been cleanup successfully" Jan 30 13:41:23.645646 kubelet[2501]: I0130 13:41:23.645609 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:23.646266 containerd[1462]: time="2025-01-30T13:41:23.645931297Z" level=info msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" Jan 30 13:41:23.646266 containerd[1462]: time="2025-01-30T13:41:23.646195334Z" level=info msg="Ensure that sandbox 5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf in task-service has been cleanup successfully" Jan 30 13:41:23.648610 kubelet[2501]: I0130 13:41:23.648087 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:23.648689 containerd[1462]: time="2025-01-30T13:41:23.648566960Z" level=info msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" Jan 30 13:41:23.648716 containerd[1462]: time="2025-01-30T13:41:23.648699770Z" level=info msg="Ensure that sandbox 7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263 in task-service has been cleanup successfully" Jan 30 13:41:23.653991 kubelet[2501]: E0130 13:41:23.653606 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:23.664049 containerd[1462]: time="2025-01-30T13:41:23.664003011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:41:23.664896 kubelet[2501]: I0130 13:41:23.664835 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:23.666290 containerd[1462]: time="2025-01-30T13:41:23.665868173Z" level=info msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" Jan 30 13:41:23.666290 containerd[1462]: time="2025-01-30T13:41:23.666048873Z" level=info msg="Ensure that sandbox ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50 in task-service has been cleanup successfully" Jan 30 13:41:23.667219 kubelet[2501]: 
I0130 13:41:23.667185 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:23.667704 containerd[1462]: time="2025-01-30T13:41:23.667665507Z" level=info msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" Jan 30 13:41:23.667889 containerd[1462]: time="2025-01-30T13:41:23.667861385Z" level=info msg="Ensure that sandbox 734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8 in task-service has been cleanup successfully" Jan 30 13:41:23.687530 containerd[1462]: time="2025-01-30T13:41:23.687463641Z" level=error msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" failed" error="failed to destroy network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.687776 kubelet[2501]: E0130 13:41:23.687742 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:23.687851 kubelet[2501]: E0130 13:41:23.687799 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf"} Jan 30 13:41:23.687883 kubelet[2501]: E0130 13:41:23.687853 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23f7c933-d0e1-4d42-a085-53875d9b091a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.687883 kubelet[2501]: E0130 13:41:23.687875 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23f7c933-d0e1-4d42-a085-53875d9b091a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9g6zr" podUID="23f7c933-d0e1-4d42-a085-53875d9b091a" Jan 30 13:41:23.709894 containerd[1462]: time="2025-01-30T13:41:23.709837296Z" level=error msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" failed" error="failed to destroy network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.710475 
kubelet[2501]: E0130 13:41:23.710079 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:23.710475 kubelet[2501]: E0130 13:41:23.710148 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7"} Jan 30 13:41:23.710475 kubelet[2501]: E0130 13:41:23.710188 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f7442c3-8bdd-40c7-a454-8cfac24075e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.710475 kubelet[2501]: E0130 13:41:23.710221 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f7442c3-8bdd-40c7-a454-8cfac24075e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" podUID="4f7442c3-8bdd-40c7-a454-8cfac24075e7" Jan 30 13:41:23.713590 containerd[1462]: time="2025-01-30T13:41:23.713444357Z" level=error msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" failed" error="failed to destroy network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.713747 kubelet[2501]: E0130 13:41:23.713690 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:23.713799 kubelet[2501]: E0130 13:41:23.713763 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263"} Jan 30 13:41:23.713833 kubelet[2501]: E0130 13:41:23.713805 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e56822bd-9fb7-4fe0-827c-0d6527cef94c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.713900 kubelet[2501]: E0130 13:41:23.713834 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e56822bd-9fb7-4fe0-827c-0d6527cef94c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" podUID="e56822bd-9fb7-4fe0-827c-0d6527cef94c" Jan 30 13:41:23.714234 containerd[1462]: time="2025-01-30T13:41:23.714165644Z" level=error msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" failed" error="failed to destroy network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.714438 kubelet[2501]: E0130 13:41:23.714350 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:23.714498 kubelet[2501]: E0130 13:41:23.714457 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed"} Jan 30 13:41:23.714498 kubelet[2501]: E0130 13:41:23.714488 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cc98b6c-2623-4585-9bb5-117c79a9fe02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.714636 kubelet[2501]: E0130 13:41:23.714602 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cc98b6c-2623-4585-9bb5-117c79a9fe02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6wdhf" podUID="7cc98b6c-2623-4585-9bb5-117c79a9fe02" Jan 30 13:41:23.715600 containerd[1462]: time="2025-01-30T13:41:23.715555471Z" level=error msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" failed" 
error="failed to destroy network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.715764 kubelet[2501]: E0130 13:41:23.715723 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:23.715764 kubelet[2501]: E0130 13:41:23.715754 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50"} Jan 30 13:41:23.715842 kubelet[2501]: E0130 13:41:23.715785 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.715842 kubelet[2501]: E0130 13:41:23.715810 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" podUID="f3d49e37-12b3-413c-8b6b-5cfccd4b4b80" Jan 30 13:41:23.723395 containerd[1462]: time="2025-01-30T13:41:23.723335048Z" level=error msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" failed" error="failed to destroy network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:41:23.723653 kubelet[2501]: E0130 13:41:23.723603 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:23.723707 kubelet[2501]: E0130 13:41:23.723666 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8"} Jan 30 13:41:23.723747 kubelet[2501]: E0130 
13:41:23.723710 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5286e518-a601-45d8-b742-fd5b70c8b40f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:41:23.723810 kubelet[2501]: E0130 13:41:23.723737 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5286e518-a601-45d8-b742-fd5b70c8b40f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xgt7b" podUID="5286e518-a601-45d8-b742-fd5b70c8b40f" Jan 30 13:41:27.348016 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:53028.service - OpenSSH per-connection server daemon (10.0.0.1:53028). Jan 30 13:41:27.388569 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 53028 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:27.390179 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:27.396087 systemd-logind[1449]: New session 8 of user core. Jan 30 13:41:27.402735 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:41:27.944975 sshd[3647]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:27.949148 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:53028.service: Deactivated successfully. Jan 30 13:41:27.951330 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:41:27.952412 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:41:27.953336 systemd-logind[1449]: Removed session 8. Jan 30 13:41:29.195832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169670653.mount: Deactivated successfully. 
Jan 30 13:41:29.854909 containerd[1462]: time="2025-01-30T13:41:29.854850822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:29.855673 containerd[1462]: time="2025-01-30T13:41:29.855639865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:41:29.857098 containerd[1462]: time="2025-01-30T13:41:29.857067759Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:29.859235 containerd[1462]: time="2025-01-30T13:41:29.859168851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:29.859738 containerd[1462]: time="2025-01-30T13:41:29.859704076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.195345927s" Jan 30 13:41:29.859774 containerd[1462]: time="2025-01-30T13:41:29.859739614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:41:29.867007 containerd[1462]: time="2025-01-30T13:41:29.866973546Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:41:29.886095 containerd[1462]: time="2025-01-30T13:41:29.886056386Z" level=info msg="CreateContainer within sandbox \"3390af1f56308f8f9788dad51c240c00cd34144b0930af030def37e0f0d2d72a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f0973ce65b9b3f90941e9b18c7847f48ec3dcd84956fff6d3192585607b71df\"" Jan 30 13:41:29.886562 containerd[1462]: time="2025-01-30T13:41:29.886529124Z" level=info msg="StartContainer for \"9f0973ce65b9b3f90941e9b18c7847f48ec3dcd84956fff6d3192585607b71df\"" Jan 30 13:41:29.951698 systemd[1]: Started cri-containerd-9f0973ce65b9b3f90941e9b18c7847f48ec3dcd84956fff6d3192585607b71df.scope - libcontainer container 9f0973ce65b9b3f90941e9b18c7847f48ec3dcd84956fff6d3192585607b71df. Jan 30 13:41:30.079897 containerd[1462]: time="2025-01-30T13:41:30.079850451Z" level=info msg="StartContainer for \"9f0973ce65b9b3f90941e9b18c7847f48ec3dcd84956fff6d3192585607b71df\" returns successfully" Jan 30 13:41:30.095487 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:41:30.095663 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 13:41:30.831856 kubelet[2501]: E0130 13:41:30.831820 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:30.843694 kubelet[2501]: I0130 13:41:30.843620 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s6k4c" podStartSLOduration=1.386487411 podStartE2EDuration="20.843601882s" podCreationTimestamp="2025-01-30 13:41:10 +0000 UTC" firstStartedPulling="2025-01-30 13:41:10.403317955 +0000 UTC m=+13.932490247" lastFinishedPulling="2025-01-30 13:41:29.860432426 +0000 UTC m=+33.389604718" observedRunningTime="2025-01-30 13:41:30.842217821 +0000 UTC m=+34.371390113" watchObservedRunningTime="2025-01-30 13:41:30.843601882 +0000 UTC m=+34.372774174" Jan 30 13:41:31.451656 kernel: bpftool[3859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:41:31.692372 systemd-networkd[1403]: vxlan.calico: Link UP Jan 30 13:41:31.692383 systemd-networkd[1403]: vxlan.calico: Gained carrier Jan 30 13:41:32.958088 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:40364.service - OpenSSH per-connection server daemon (10.0.0.1:40364). Jan 30 13:41:32.995034 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 40364 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:32.996755 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:33.000720 systemd-logind[1449]: New session 9 of user core. Jan 30 13:41:33.008656 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:41:33.128854 sshd[3932]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:33.132481 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:40364.service: Deactivated successfully. Jan 30 13:41:33.134486 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:41:33.135222 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:41:33.136080 systemd-logind[1449]: Removed session 9. Jan 30 13:41:33.470664 systemd-networkd[1403]: vxlan.calico: Gained IPv6LL Jan 30 13:41:35.558280 containerd[1462]: time="2025-01-30T13:41:35.558140105Z" level=info msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.602 [INFO][3966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.603 [INFO][3966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" iface="eth0" netns="/var/run/netns/cni-7d9f075a-5dbe-23ff-1cf8-410739bd4057" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.603 [INFO][3966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" iface="eth0" netns="/var/run/netns/cni-7d9f075a-5dbe-23ff-1cf8-410739bd4057" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.603 [INFO][3966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" iface="eth0" netns="/var/run/netns/cni-7d9f075a-5dbe-23ff-1cf8-410739bd4057" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.604 [INFO][3966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.604 [INFO][3966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.651 [INFO][3973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.652 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.652 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.658 [WARNING][3973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.658 [INFO][3973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.659 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:35.664308 containerd[1462]: 2025-01-30 13:41:35.661 [INFO][3966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:35.664706 containerd[1462]: time="2025-01-30T13:41:35.664517943Z" level=info msg="TearDown network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" successfully" Jan 30 13:41:35.664706 containerd[1462]: time="2025-01-30T13:41:35.664550043Z" level=info msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" returns successfully" Jan 30 13:41:35.665304 containerd[1462]: time="2025-01-30T13:41:35.665281207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-5c8cv,Uid:e56822bd-9fb7-4fe0-827c-0d6527cef94c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:41:35.666926 systemd[1]: run-netns-cni\x2d7d9f075a\x2d5dbe\x2d23ff\x2d1cf8\x2d410739bd4057.mount: Deactivated successfully. 
Jan 30 13:41:35.761567 systemd-networkd[1403]: cali4b40295aa86: Link UP Jan 30 13:41:35.761772 systemd-networkd[1403]: cali4b40295aa86: Gained carrier Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.705 [INFO][3981] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0 calico-apiserver-7b5f976dbf- calico-apiserver e56822bd-9fb7-4fe0-827c-0d6527cef94c 826 0 2025-01-30 13:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5f976dbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b5f976dbf-5c8cv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b40295aa86 [] []}} ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.705 [INFO][3981] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.730 [INFO][3993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" HandleID="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.736 [INFO][3993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" HandleID="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405a40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b5f976dbf-5c8cv", "timestamp":"2025-01-30 13:41:35.730279431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.737 [INFO][3993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.737 [INFO][3993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.737 [INFO][3993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.738 [INFO][3993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.742 [INFO][3993] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.746 [INFO][3993] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.747 [INFO][3993] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.748 [INFO][3993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.748 [INFO][3993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.749 [INFO][3993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9 Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.753 [INFO][3993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.756 [INFO][3993] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.756 [INFO][3993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" host="localhost" Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.756 [INFO][3993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:35.776356 containerd[1462]: 2025-01-30 13:41:35.756 [INFO][3993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" HandleID="k8s-pod-network.11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.759 [INFO][3981] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e56822bd-9fb7-4fe0-827c-0d6527cef94c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b5f976dbf-5c8cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40295aa86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.759 [INFO][3981] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.759 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b40295aa86 ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.761 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.762 [INFO][3981] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e56822bd-9fb7-4fe0-827c-0d6527cef94c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9", Pod:"calico-apiserver-7b5f976dbf-5c8cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40295aa86", MAC:"7a:3a:c3:59:99:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:35.776897 containerd[1462]: 2025-01-30 13:41:35.769 [INFO][3981] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-5c8cv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:35.806317 containerd[1462]: time="2025-01-30T13:41:35.806224448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:35.806418 containerd[1462]: time="2025-01-30T13:41:35.806372857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:35.807028 containerd[1462]: time="2025-01-30T13:41:35.806463818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:35.807194 containerd[1462]: time="2025-01-30T13:41:35.807149637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:35.829633 systemd[1]: Started cri-containerd-11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9.scope - libcontainer container 11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9.
Jan 30 13:41:35.841764 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:35.870880 containerd[1462]: time="2025-01-30T13:41:35.870845824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-5c8cv,Uid:e56822bd-9fb7-4fe0-827c-0d6527cef94c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9\"" Jan 30 13:41:35.872164 containerd[1462]: time="2025-01-30T13:41:35.872141358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:41:36.558536 containerd[1462]: time="2025-01-30T13:41:36.558475506Z" level=info msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.660 [INFO][4074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.660 [INFO][4074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" iface="eth0" netns="/var/run/netns/cni-93e7db1d-7f19-fed1-5fc8-8f22359c909a" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.660 [INFO][4074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" iface="eth0" netns="/var/run/netns/cni-93e7db1d-7f19-fed1-5fc8-8f22359c909a" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.661 [INFO][4074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" iface="eth0" netns="/var/run/netns/cni-93e7db1d-7f19-fed1-5fc8-8f22359c909a" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.661 [INFO][4074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.661 [INFO][4074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.682 [INFO][4082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.682 [INFO][4082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.682 [INFO][4082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.687 [WARNING][4082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.687 [INFO][4082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.689 [INFO][4082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:36.693319 containerd[1462]: 2025-01-30 13:41:36.691 [INFO][4074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:36.693756 containerd[1462]: time="2025-01-30T13:41:36.693519504Z" level=info msg="TearDown network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" successfully" Jan 30 13:41:36.693756 containerd[1462]: time="2025-01-30T13:41:36.693549901Z" level=info msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" returns successfully" Jan 30 13:41:36.694218 containerd[1462]: time="2025-01-30T13:41:36.694188390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-r7hdj,Uid:f3d49e37-12b3-413c-8b6b-5cfccd4b4b80,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:41:36.696251 systemd[1]: run-netns-cni\x2d93e7db1d\x2d7f19\x2dfed1\x2d5fc8\x2d8f22359c909a.mount: Deactivated successfully. Jan 30 13:41:36.825991 systemd-networkd[1403]: cali44decc22375: Link UP Jan 30 13:41:36.826952 systemd-networkd[1403]: cali44decc22375: Gained carrier Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.769 [INFO][4090] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0 calico-apiserver-7b5f976dbf- calico-apiserver f3d49e37-12b3-413c-8b6b-5cfccd4b4b80 838 0 2025-01-30 13:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5f976dbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b5f976dbf-r7hdj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44decc22375 [] []}} ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.769 [INFO][4090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.794 [INFO][4103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" 
HandleID="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.800 [INFO][4103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" HandleID="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000505e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b5f976dbf-r7hdj", "timestamp":"2025-01-30 13:41:36.794279716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.800 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.800 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.800 [INFO][4103] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.802 [INFO][4103] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.805 [INFO][4103] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.808 [INFO][4103] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.810 [INFO][4103] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.811 [INFO][4103] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.811 [INFO][4103] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.812 [INFO][4103] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9 Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.815 [INFO][4103] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.821 [INFO][4103] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.821 [INFO][4103] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" host="localhost" Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.821 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:36.840394 containerd[1462]: 2025-01-30 13:41:36.821 [INFO][4103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" HandleID="k8s-pod-network.508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.824 [INFO][4090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b5f976dbf-r7hdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44decc22375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.824 [INFO][4090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.824 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44decc22375 ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.826 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj"
WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.826 [INFO][4090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9", Pod:"calico-apiserver-7b5f976dbf-r7hdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44decc22375", MAC:"22:64:e9:63:ca:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:36.840959 containerd[1462]: 2025-01-30 13:41:36.836 [INFO][4090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f976dbf-r7hdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:36.864800 containerd[1462]: time="2025-01-30T13:41:36.864708796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:36.864800 containerd[1462]: time="2025-01-30T13:41:36.864757818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:36.864800 containerd[1462]: time="2025-01-30T13:41:36.864768217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:36.865000 containerd[1462]: time="2025-01-30T13:41:36.864849771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:36.885636 systemd[1]: Started cri-containerd-508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9.scope - libcontainer container 508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9. 
Jan 30 13:41:36.897528 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:36.925929 containerd[1462]: time="2025-01-30T13:41:36.925878664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f976dbf-r7hdj,Uid:f3d49e37-12b3-413c-8b6b-5cfccd4b4b80,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9\"" Jan 30 13:41:37.182661 systemd-networkd[1403]: cali4b40295aa86: Gained IPv6LL Jan 30 13:41:38.140535 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:40378.service - OpenSSH per-connection server daemon (10.0.0.1:40378). Jan 30 13:41:38.229796 containerd[1462]: time="2025-01-30T13:41:38.229745619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:38.230426 containerd[1462]: time="2025-01-30T13:41:38.230367417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:41:38.231935 containerd[1462]: time="2025-01-30T13:41:38.231905835Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:38.233914 containerd[1462]: time="2025-01-30T13:41:38.233883529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:38.234592 containerd[1462]: time="2025-01-30T13:41:38.234557465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.362386291s" Jan 30 13:41:38.234656 containerd[1462]: time="2025-01-30T13:41:38.234593042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:41:38.236822 containerd[1462]: time="2025-01-30T13:41:38.236791320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:41:38.238303 containerd[1462]: time="2025-01-30T13:41:38.237723000Z" level=info msg="CreateContainer within sandbox \"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:41:38.238541 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:38.243235 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:38.248624 systemd-logind[1449]: New session 10 of user core. Jan 30 13:41:38.257642 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 13:41:38.262904 containerd[1462]: time="2025-01-30T13:41:38.262855236Z" level=info msg="CreateContainer within sandbox \"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d48d8cb2c479643d7bbe1617d992a0f2e462539604a074cd347c59348d00ffef\"" Jan 30 13:41:38.263389 containerd[1462]: time="2025-01-30T13:41:38.263361457Z" level=info msg="StartContainer for \"d48d8cb2c479643d7bbe1617d992a0f2e462539604a074cd347c59348d00ffef\"" Jan 30 13:41:38.269924 kubelet[2501]: I0130 13:41:38.269715 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:38.270633 kubelet[2501]: E0130 13:41:38.270111 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:38.296642 systemd[1]: Started cri-containerd-d48d8cb2c479643d7bbe1617d992a0f2e462539604a074cd347c59348d00ffef.scope - libcontainer container d48d8cb2c479643d7bbe1617d992a0f2e462539604a074cd347c59348d00ffef. Jan 30 13:41:38.342109 containerd[1462]: time="2025-01-30T13:41:38.342051562Z" level=info msg="StartContainer for \"d48d8cb2c479643d7bbe1617d992a0f2e462539604a074cd347c59348d00ffef\" returns successfully" Jan 30 13:41:38.426446 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:38.436318 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:40378.service: Deactivated successfully. Jan 30 13:41:38.437992 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:41:38.439192 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:41:38.447778 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:40392.service - OpenSSH per-connection server daemon (10.0.0.1:40392). Jan 30 13:41:38.449607 systemd-logind[1449]: Removed session 10. Jan 30 13:41:38.479230 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 40392 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:38.480786 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:38.485101 systemd-logind[1449]: New session 11 of user core. Jan 30 13:41:38.494621 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 13:41:38.559250 containerd[1462]: time="2025-01-30T13:41:38.558962316Z" level=info msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" Jan 30 13:41:38.560931 containerd[1462]: time="2025-01-30T13:41:38.559730990Z" level=info msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" Jan 30 13:41:38.560931 containerd[1462]: time="2025-01-30T13:41:38.560114991Z" level=info msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" Jan 30 13:41:38.560931 containerd[1462]: time="2025-01-30T13:41:38.560122355Z" level=info msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" Jan 30 13:41:38.683867 containerd[1462]: time="2025-01-30T13:41:38.683740713Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:38.684682 containerd[1462]: time="2025-01-30T13:41:38.684524585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:41:38.709586 containerd[1462]: time="2025-01-30T13:41:38.707709855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 470.885523ms" Jan 30 13:41:38.709586 containerd[1462]: time="2025-01-30T13:41:38.709566051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:41:38.710219 sshd[4280]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.635 [INFO][4362] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.636 [INFO][4362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" iface="eth0" netns="/var/run/netns/cni-c5864257-6fc5-5d74-dcb6-0fc17bfbd36c" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.636 [INFO][4362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" iface="eth0" netns="/var/run/netns/cni-c5864257-6fc5-5d74-dcb6-0fc17bfbd36c" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.636 [INFO][4362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" iface="eth0" netns="/var/run/netns/cni-c5864257-6fc5-5d74-dcb6-0fc17bfbd36c" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.636 [INFO][4362] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.636 [INFO][4362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.685 [INFO][4385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.685 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.685 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.695 [WARNING][4385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.695 [INFO][4385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.697 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:38.716262 containerd[1462]: 2025-01-30 13:41:38.713 [INFO][4362] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:38.716669 containerd[1462]: time="2025-01-30T13:41:38.716445951Z" level=info msg="TearDown network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" successfully" Jan 30 13:41:38.716669 containerd[1462]: time="2025-01-30T13:41:38.716498059Z" level=info msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" returns successfully" Jan 30 13:41:38.719138 containerd[1462]: time="2025-01-30T13:41:38.718520467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g6zr,Uid:23f7c933-d0e1-4d42-a085-53875d9b091a,Namespace:calico-system,Attempt:1,}" Jan 30 13:41:38.722292 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:40392.service: Deactivated successfully. Jan 30 13:41:38.726546 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:41:38.728291 containerd[1462]: time="2025-01-30T13:41:38.727540776Z" level=info msg="CreateContainer within sandbox \"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:41:38.730006 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. 
Jan 30 13:41:38.741807 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:40400.service - OpenSSH per-connection server daemon (10.0.0.1:40400). Jan 30 13:41:38.742413 systemd-logind[1449]: Removed session 11. Jan 30 13:41:38.766752 containerd[1462]: time="2025-01-30T13:41:38.766706607Z" level=info msg="CreateContainer within sandbox \"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"618dffad84bf0598d4ddd90e560e406ffab72b2e7968ba7739edbfe1f6cac57b\"" Jan 30 13:41:38.770024 containerd[1462]: time="2025-01-30T13:41:38.769995022Z" level=info msg="StartContainer for \"618dffad84bf0598d4ddd90e560e406ffab72b2e7968ba7739edbfe1f6cac57b\"" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.665 [INFO][4357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.668 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" iface="eth0" netns="/var/run/netns/cni-523fef0d-39cc-e7dd-fea2-fe030cd19752" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.669 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" iface="eth0" netns="/var/run/netns/cni-523fef0d-39cc-e7dd-fea2-fe030cd19752" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.670 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" iface="eth0" netns="/var/run/netns/cni-523fef0d-39cc-e7dd-fea2-fe030cd19752" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.741 [INFO][4395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.742 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.742 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.749 [WARNING][4395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.749 [INFO][4395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.753 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:38.783449 containerd[1462]: 2025-01-30 13:41:38.757 [INFO][4357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:38.784969 containerd[1462]: time="2025-01-30T13:41:38.783907219Z" level=info msg="TearDown network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" successfully" Jan 30 13:41:38.784969 containerd[1462]: time="2025-01-30T13:41:38.784243540Z" level=info msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" returns successfully" Jan 30 13:41:38.785139 kubelet[2501]: E0130 13:41:38.784668 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:38.785535 containerd[1462]: time="2025-01-30T13:41:38.785299774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6wdhf,Uid:7cc98b6c-2623-4585-9bb5-117c79a9fe02,Namespace:kube-system,Attempt:1,}" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.666 [INFO][4351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.669 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" iface="eth0" netns="/var/run/netns/cni-db90411d-550f-5044-1fc6-1fa2bc3f8f4f" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" iface="eth0" netns="/var/run/netns/cni-db90411d-550f-5044-1fc6-1fa2bc3f8f4f" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" iface="eth0" netns="/var/run/netns/cni-db90411d-550f-5044-1fc6-1fa2bc3f8f4f" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.748 [INFO][4394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.748 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.753 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.761 [WARNING][4394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.761 [INFO][4394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.762 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:38.793727 containerd[1462]: 2025-01-30 13:41:38.781 [INFO][4351] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:38.794707 containerd[1462]: time="2025-01-30T13:41:38.794675331Z" level=info msg="TearDown network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" successfully" Jan 30 13:41:38.794707 containerd[1462]: time="2025-01-30T13:41:38.794703233Z" level=info msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" returns successfully" Jan 30 13:41:38.796524 containerd[1462]: time="2025-01-30T13:41:38.795422163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bbf48d84-qbktp,Uid:4f7442c3-8bdd-40c7-a454-8cfac24075e7,Namespace:calico-system,Attempt:1,}" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.673 [INFO][4356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.673 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" iface="eth0" netns="/var/run/netns/cni-573e5c12-52a4-19eb-5022-0c742fd5f8ff" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.674 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" iface="eth0" netns="/var/run/netns/cni-573e5c12-52a4-19eb-5022-0c742fd5f8ff" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.676 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" iface="eth0" netns="/var/run/netns/cni-573e5c12-52a4-19eb-5022-0c742fd5f8ff" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.677 [INFO][4356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.677 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.763 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.764 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.764 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.785 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.785 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.788 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:38.801475 containerd[1462]: 2025-01-30 13:41:38.795 [INFO][4356] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:38.802128 containerd[1462]: time="2025-01-30T13:41:38.801611766Z" level=info msg="TearDown network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" successfully" Jan 30 13:41:38.802128 containerd[1462]: time="2025-01-30T13:41:38.801628428Z" level=info msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" returns successfully" Jan 30 13:41:38.802174 kubelet[2501]: E0130 13:41:38.801849 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:38.803047 containerd[1462]: time="2025-01-30T13:41:38.802992469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgt7b,Uid:5286e518-a601-45d8-b742-fd5b70c8b40f,Namespace:kube-system,Attempt:1,}" Jan 30 13:41:38.804824 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 40400 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:38.808095 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:38.815686 systemd[1]: Started cri-containerd-618dffad84bf0598d4ddd90e560e406ffab72b2e7968ba7739edbfe1f6cac57b.scope - libcontainer container 618dffad84bf0598d4ddd90e560e406ffab72b2e7968ba7739edbfe1f6cac57b. Jan 30 13:41:38.821016 systemd-logind[1449]: New session 12 of user core. Jan 30 13:41:38.822176 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:41:38.847101 systemd-networkd[1403]: cali44decc22375: Gained IPv6LL Jan 30 13:41:38.905965 containerd[1462]: time="2025-01-30T13:41:38.905361863Z" level=info msg="StartContainer for \"618dffad84bf0598d4ddd90e560e406ffab72b2e7968ba7739edbfe1f6cac57b\" returns successfully" Jan 30 13:41:38.931111 kubelet[2501]: E0130 13:41:38.930715 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:38.939179 kubelet[2501]: I0130 13:41:38.937729 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b5f976dbf-5c8cv" podStartSLOduration=26.573722552 podStartE2EDuration="28.93770953s" podCreationTimestamp="2025-01-30 13:41:10 +0000 UTC" firstStartedPulling="2025-01-30 13:41:35.871776502 +0000 UTC m=+39.400948794" lastFinishedPulling="2025-01-30 13:41:38.23576348 +0000 UTC m=+41.764935772" observedRunningTime="2025-01-30 13:41:38.937088193 +0000 UTC m=+42.466260485" watchObservedRunningTime="2025-01-30 13:41:38.93770953 +0000 UTC m=+42.466881822" Jan 30 13:41:39.027014 sshd[4418]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:39.031807 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:40400.service: Deactivated successfully. Jan 30 13:41:39.034604 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:41:39.035627 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:41:39.038911 systemd-logind[1449]: Removed session 12. 
Jan 30 13:41:39.054596 systemd-networkd[1403]: calic3377850fee: Link UP Jan 30 13:41:39.055244 systemd-networkd[1403]: calic3377850fee: Gained carrier Jan 30 13:41:39.065529 kubelet[2501]: I0130 13:41:39.064765 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b5f976dbf-r7hdj" podStartSLOduration=27.274532243 podStartE2EDuration="29.06474333s" podCreationTimestamp="2025-01-30 13:41:10 +0000 UTC" firstStartedPulling="2025-01-30 13:41:36.927446058 +0000 UTC m=+40.456618350" lastFinishedPulling="2025-01-30 13:41:38.717657145 +0000 UTC m=+42.246829437" observedRunningTime="2025-01-30 13:41:38.958675442 +0000 UTC m=+42.487847724" watchObservedRunningTime="2025-01-30 13:41:39.06474333 +0000 UTC m=+42.593915722" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.811 [INFO][4424] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9g6zr-eth0 csi-node-driver- calico-system 23f7c933-d0e1-4d42-a085-53875d9b091a 868 0 2025-01-30 13:41:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9g6zr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic3377850fee [] []}} ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.812 [INFO][4424] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.883 [INFO][4457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" HandleID="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.907 [INFO][4457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" HandleID="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053ce30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9g6zr", "timestamp":"2025-01-30 13:41:38.883711234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.907 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.908 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.909 [INFO][4457] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:38.915 [INFO][4457] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.004 [INFO][4457] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.017 [INFO][4457] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.020 [INFO][4457] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.022 [INFO][4457] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.022 [INFO][4457] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.024 [INFO][4457] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2 Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.032 [INFO][4457] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.044 [INFO][4457] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.044 [INFO][4457] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" host="localhost" Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.044 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:39.070214 containerd[1462]: 2025-01-30 13:41:39.044 [INFO][4457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" HandleID="k8s-pod-network.fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.050 [INFO][4424] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9g6zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f7c933-d0e1-4d42-a085-53875d9b091a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9g6zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3377850fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.050 [INFO][4424] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.050 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3377850fee ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.054 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.055 [INFO][4424] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9g6zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f7c933-d0e1-4d42-a085-53875d9b091a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2", Pod:"csi-node-driver-9g6zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3377850fee", MAC:"02:08:03:0b:e0:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.071156 containerd[1462]: 2025-01-30 13:41:39.066 [INFO][4424] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2" Namespace="calico-system" Pod="csi-node-driver-9g6zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:39.131708 systemd-networkd[1403]: cali7aead9e4f66: Link UP Jan 30 13:41:39.131976 systemd-networkd[1403]: cali7aead9e4f66: Gained carrier Jan 30 13:41:39.137044 containerd[1462]: time="2025-01-30T13:41:39.135698084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:39.137044 containerd[1462]: time="2025-01-30T13:41:39.135751736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:39.137044 containerd[1462]: time="2025-01-30T13:41:39.135776783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.137044 containerd[1462]: time="2025-01-30T13:41:39.135902148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:38.900 [INFO][4480] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0 calico-kube-controllers-58bbf48d84- calico-system 4f7442c3-8bdd-40c7-a454-8cfac24075e7 870 0 2025-01-30 13:41:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58bbf48d84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58bbf48d84-qbktp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7aead9e4f66 [] []}} ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:38.900 [INFO][4480] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:38.982 [INFO][4531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" HandleID="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.010 [INFO][4531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" HandleID="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000304b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58bbf48d84-qbktp", "timestamp":"2025-01-30 13:41:38.981709358 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.011 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.045 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.045 [INFO][4531] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.047 [INFO][4531] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.105 [INFO][4531] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.111 [INFO][4531] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.112 [INFO][4531] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.114 [INFO][4531] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.114 [INFO][4531] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.116 [INFO][4531] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.119 [INFO][4531] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4531] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4531] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" host="localhost" Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:39.148157 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" HandleID="k8s-pod-network.cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.128 [INFO][4480] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0", GenerateName:"calico-kube-controllers-58bbf48d84-", Namespace:"calico-system", SelfLink:"", UID:"4f7442c3-8bdd-40c7-a454-8cfac24075e7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bbf48d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58bbf48d84-qbktp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7aead9e4f66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.129 [INFO][4480] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.129 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7aead9e4f66 ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.131 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.132 [INFO][4480] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0", GenerateName:"calico-kube-controllers-58bbf48d84-", Namespace:"calico-system", SelfLink:"", UID:"4f7442c3-8bdd-40c7-a454-8cfac24075e7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bbf48d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f", Pod:"calico-kube-controllers-58bbf48d84-qbktp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7aead9e4f66", MAC:"62:3a:a1:4e:80:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.148729 containerd[1462]: 2025-01-30 13:41:39.140 [INFO][4480] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f" Namespace="calico-system" Pod="calico-kube-controllers-58bbf48d84-qbktp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:39.169482 systemd[1]: Started cri-containerd-fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2.scope - libcontainer container fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2. Jan 30 13:41:39.178402 containerd[1462]: time="2025-01-30T13:41:39.178293308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:39.178402 containerd[1462]: time="2025-01-30T13:41:39.178371495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:39.178697 containerd[1462]: time="2025-01-30T13:41:39.178603851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.179643 containerd[1462]: time="2025-01-30T13:41:39.179527095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.186376 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:39.200700 systemd[1]: Started cri-containerd-cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f.scope - libcontainer container cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f. Jan 30 13:41:39.203429 containerd[1462]: time="2025-01-30T13:41:39.203387970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g6zr,Uid:23f7c933-d0e1-4d42-a085-53875d9b091a,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2\"" Jan 30 13:41:39.207492 containerd[1462]: time="2025-01-30T13:41:39.207463562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:41:39.221880 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:39.250327 systemd-networkd[1403]: calicb25268cbf7: Link UP Jan 30 13:41:39.250601 systemd-networkd[1403]: calicb25268cbf7: Gained carrier Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:38.913 [INFO][4482] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0 coredns-668d6bf9bc- kube-system 5286e518-a601-45d8-b742-fd5b70c8b40f 871 0 2025-01-30 13:41:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xgt7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb25268cbf7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:38.914 [INFO][4482] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.006 [INFO][4540] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" HandleID="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.016 [INFO][4540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" HandleID="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005d8cd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xgt7b", "timestamp":"2025-01-30 13:41:39.003439627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.016 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.125 [INFO][4540] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.149 [INFO][4540] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.207 [INFO][4540] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.223 [INFO][4540] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.225 [INFO][4540] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.227 [INFO][4540] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.227 [INFO][4540] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.229 [INFO][4540] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714 Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.235 [INFO][4540] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4540] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4540] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" host="localhost" Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:39.269907 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" HandleID="k8s-pod-network.7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.245 [INFO][4482] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5286e518-a601-45d8-b742-fd5b70c8b40f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xgt7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb25268cbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.246 [INFO][4482] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.246 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb25268cbf7 ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.250 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.250 
[INFO][4482] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5286e518-a601-45d8-b742-fd5b70c8b40f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714", Pod:"coredns-668d6bf9bc-xgt7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb25268cbf7", MAC:"3e:7a:34:f1:23:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.270657 containerd[1462]: 2025-01-30 13:41:39.261 [INFO][4482] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714" Namespace="kube-system" Pod="coredns-668d6bf9bc-xgt7b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:39.273537 systemd[1]: run-netns-cni\x2dc5864257\x2d6fc5\x2d5d74\x2ddcb6\x2d0fc17bfbd36c.mount: Deactivated successfully. Jan 30 13:41:39.273657 systemd[1]: run-netns-cni\x2d523fef0d\x2d39cc\x2de7dd\x2dfea2\x2dfe030cd19752.mount: Deactivated successfully. Jan 30 13:41:39.273729 systemd[1]: run-netns-cni\x2ddb90411d\x2d550f\x2d5044\x2d1fc6\x2d1fa2bc3f8f4f.mount: Deactivated successfully. Jan 30 13:41:39.273870 systemd[1]: run-netns-cni\x2d573e5c12\x2d52a4\x2d19eb\x2d5022\x2d0c742fd5f8ff.mount: Deactivated successfully. Jan 30 13:41:39.276922 containerd[1462]: time="2025-01-30T13:41:39.276003193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bbf48d84-qbktp,Uid:4f7442c3-8bdd-40c7-a454-8cfac24075e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f\"" Jan 30 13:41:39.300600 containerd[1462]: time="2025-01-30T13:41:39.298042507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:39.300600 containerd[1462]: time="2025-01-30T13:41:39.298746860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:39.300600 containerd[1462]: time="2025-01-30T13:41:39.298762299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.300600 containerd[1462]: time="2025-01-30T13:41:39.298867015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.318701 systemd[1]: Started cri-containerd-7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714.scope - libcontainer container 7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714. Jan 30 13:41:39.332195 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:39.339105 systemd-networkd[1403]: calif13308af907: Link UP Jan 30 13:41:39.339298 systemd-networkd[1403]: calif13308af907: Gained carrier Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:38.940 [INFO][4472] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0 coredns-668d6bf9bc- kube-system 7cc98b6c-2623-4585-9bb5-117c79a9fe02 869 0 2025-01-30 13:41:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6wdhf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif13308af907 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:38.940 [INFO][4472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.029 [INFO][4549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" HandleID="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.107 [INFO][4549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" HandleID="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6wdhf", "timestamp":"2025-01-30 13:41:39.029088207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.107 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.241 [INFO][4549] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.251 [INFO][4549] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.306 [INFO][4549] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.316 [INFO][4549] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.318 [INFO][4549] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.320 [INFO][4549] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.320 [INFO][4549] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.321 [INFO][4549] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.325 [INFO][4549] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.331 [INFO][4549] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.331 [INFO][4549] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" host="localhost" Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.331 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:41:39.357294 containerd[1462]: 2025-01-30 13:41:39.331 [INFO][4549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" HandleID="k8s-pod-network.432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.337 [INFO][4472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc98b6c-2623-4585-9bb5-117c79a9fe02", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6wdhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif13308af907", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.337 [INFO][4472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.337 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif13308af907 ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.339 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.341 
[INFO][4472] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc98b6c-2623-4585-9bb5-117c79a9fe02", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa", Pod:"coredns-668d6bf9bc-6wdhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif13308af907", MAC:"f6:1a:c6:8f:43:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:39.357878 containerd[1462]: 2025-01-30 13:41:39.349 [INFO][4472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-6wdhf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:39.369217 containerd[1462]: time="2025-01-30T13:41:39.369163375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgt7b,Uid:5286e518-a601-45d8-b742-fd5b70c8b40f,Namespace:kube-system,Attempt:1,} returns sandbox id \"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714\"" Jan 30 13:41:39.371224 kubelet[2501]: E0130 13:41:39.370819 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:39.374646 containerd[1462]: time="2025-01-30T13:41:39.374610002Z" level=info msg="CreateContainer within sandbox \"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:41:39.389807 containerd[1462]: time="2025-01-30T13:41:39.389641197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:39.389807 containerd[1462]: time="2025-01-30T13:41:39.389716519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:39.389807 containerd[1462]: time="2025-01-30T13:41:39.389730665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.389989 containerd[1462]: time="2025-01-30T13:41:39.389837405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:39.400675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658144936.mount: Deactivated successfully. Jan 30 13:41:39.406163 containerd[1462]: time="2025-01-30T13:41:39.406126322Z" level=info msg="CreateContainer within sandbox \"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83bb6f41d82f3a2f1bc3846e0a518b6125eb7c11bee350bacd46366ed64dfd96\"" Jan 30 13:41:39.406950 containerd[1462]: time="2025-01-30T13:41:39.406906787Z" level=info msg="StartContainer for \"83bb6f41d82f3a2f1bc3846e0a518b6125eb7c11bee350bacd46366ed64dfd96\"" Jan 30 13:41:39.413686 systemd[1]: Started cri-containerd-432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa.scope - libcontainer container 432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa. Jan 30 13:41:39.432205 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:41:39.442648 systemd[1]: Started cri-containerd-83bb6f41d82f3a2f1bc3846e0a518b6125eb7c11bee350bacd46366ed64dfd96.scope - libcontainer container 83bb6f41d82f3a2f1bc3846e0a518b6125eb7c11bee350bacd46366ed64dfd96. 
Jan 30 13:41:39.463859 containerd[1462]: time="2025-01-30T13:41:39.463687728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6wdhf,Uid:7cc98b6c-2623-4585-9bb5-117c79a9fe02,Namespace:kube-system,Attempt:1,} returns sandbox id \"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa\"" Jan 30 13:41:39.465262 kubelet[2501]: E0130 13:41:39.465168 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:39.470042 containerd[1462]: time="2025-01-30T13:41:39.470011162Z" level=info msg="CreateContainer within sandbox \"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:41:39.479304 containerd[1462]: time="2025-01-30T13:41:39.479202501Z" level=info msg="StartContainer for \"83bb6f41d82f3a2f1bc3846e0a518b6125eb7c11bee350bacd46366ed64dfd96\" returns successfully" Jan 30 13:41:39.489167 containerd[1462]: time="2025-01-30T13:41:39.489131936Z" level=info msg="CreateContainer within sandbox \"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1437ceb27d71094ceed61fa9839b2410d553a965d9f9b65449227d283f8422da\"" Jan 30 13:41:39.490032 containerd[1462]: time="2025-01-30T13:41:39.489792206Z" level=info msg="StartContainer for \"1437ceb27d71094ceed61fa9839b2410d553a965d9f9b65449227d283f8422da\"" Jan 30 13:41:39.521648 systemd[1]: Started cri-containerd-1437ceb27d71094ceed61fa9839b2410d553a965d9f9b65449227d283f8422da.scope - libcontainer container 1437ceb27d71094ceed61fa9839b2410d553a965d9f9b65449227d283f8422da. Jan 30 13:41:39.587949 containerd[1462]: time="2025-01-30T13:41:39.587897384Z" level=info msg="StartContainer for \"1437ceb27d71094ceed61fa9839b2410d553a965d9f9b65449227d283f8422da\" returns successfully" Jan 30 13:41:39.933792 kubelet[2501]: E0130 13:41:39.933667 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:39.936678 kubelet[2501]: I0130 13:41:39.936646 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:39.938596 kubelet[2501]: I0130 13:41:39.938575 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:39.938838 kubelet[2501]: E0130 13:41:39.938810 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:39.943538 kubelet[2501]: I0130 13:41:39.943459 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6wdhf" podStartSLOduration=36.943440866 podStartE2EDuration="36.943440866s" podCreationTimestamp="2025-01-30 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:39.942582885 +0000 UTC m=+43.471755177" watchObservedRunningTime="2025-01-30 13:41:39.943440866 +0000 UTC m=+43.472613158" Jan 30 13:41:39.977384 kubelet[2501]: I0130 13:41:39.976776 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xgt7b" podStartSLOduration=36.97675403 podStartE2EDuration="36.97675403s" 
podCreationTimestamp="2025-01-30 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:39.976350572 +0000 UTC m=+43.505522864" watchObservedRunningTime="2025-01-30 13:41:39.97675403 +0000 UTC m=+43.505926312" Jan 30 13:41:40.262965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086859279.mount: Deactivated successfully. Jan 30 13:41:40.766699 systemd-networkd[1403]: cali7aead9e4f66: Gained IPv6LL Jan 30 13:41:40.769568 systemd-networkd[1403]: calicb25268cbf7: Gained IPv6LL Jan 30 13:41:40.830614 systemd-networkd[1403]: calif13308af907: Gained IPv6LL Jan 30 13:41:40.831116 systemd-networkd[1403]: calic3377850fee: Gained IPv6LL Jan 30 13:41:40.940644 kubelet[2501]: E0130 13:41:40.940614 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:40.941028 kubelet[2501]: E0130 13:41:40.940854 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:40.961193 containerd[1462]: time="2025-01-30T13:41:40.961146821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:40.961801 containerd[1462]: time="2025-01-30T13:41:40.961738472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:41:40.962781 containerd[1462]: time="2025-01-30T13:41:40.962749170Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:40.964695 containerd[1462]: time="2025-01-30T13:41:40.964657092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:40.965297 containerd[1462]: time="2025-01-30T13:41:40.965258502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.757661998s" Jan 30 13:41:40.965297 containerd[1462]: time="2025-01-30T13:41:40.965286544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:41:40.966095 containerd[1462]: time="2025-01-30T13:41:40.966060197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:41:40.967199 containerd[1462]: time="2025-01-30T13:41:40.967161013Z" level=info msg="CreateContainer within sandbox \"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:41:40.983778 containerd[1462]: time="2025-01-30T13:41:40.983742897Z" level=info msg="CreateContainer within sandbox \"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"be37b571e8073636c2ef0c6b29f89f06e366b6804d266ed1ab5de44af79bd0aa\"" Jan 30 13:41:40.984246 containerd[1462]: time="2025-01-30T13:41:40.984210075Z" level=info msg="StartContainer for \"be37b571e8073636c2ef0c6b29f89f06e366b6804d266ed1ab5de44af79bd0aa\"" Jan 30 13:41:41.016634 systemd[1]: Started cri-containerd-be37b571e8073636c2ef0c6b29f89f06e366b6804d266ed1ab5de44af79bd0aa.scope - libcontainer container be37b571e8073636c2ef0c6b29f89f06e366b6804d266ed1ab5de44af79bd0aa. Jan 30 13:41:41.046689 containerd[1462]: time="2025-01-30T13:41:41.046572126Z" level=info msg="StartContainer for \"be37b571e8073636c2ef0c6b29f89f06e366b6804d266ed1ab5de44af79bd0aa\" returns successfully" Jan 30 13:41:41.944133 kubelet[2501]: E0130 13:41:41.944098 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:41.944133 kubelet[2501]: E0130 13:41:41.944134 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:41:42.764618 containerd[1462]: time="2025-01-30T13:41:42.764563982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:42.765470 containerd[1462]: time="2025-01-30T13:41:42.765394601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:41:42.766671 containerd[1462]: time="2025-01-30T13:41:42.766624080Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:42.772901 containerd[1462]: time="2025-01-30T13:41:42.772842253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:42.773678 containerd[1462]: time="2025-01-30T13:41:42.773595667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.807508159s" Jan 30 13:41:42.773678 containerd[1462]: time="2025-01-30T13:41:42.773681388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:41:42.774547 containerd[1462]: time="2025-01-30T13:41:42.774423832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:41:42.782308 containerd[1462]: time="2025-01-30T13:41:42.782261215Z" level=info msg="CreateContainer within sandbox \"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:41:42.798187 containerd[1462]: time="2025-01-30T13:41:42.798143590Z" level=info msg="CreateContainer within sandbox \"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3c272f6054aa7fd5270ba217fe78960b8d9ec54e661ecd4ffc7b398c8e4f55ed\"" Jan 30 13:41:42.798918 containerd[1462]: time="2025-01-30T13:41:42.798887727Z" level=info msg="StartContainer for \"3c272f6054aa7fd5270ba217fe78960b8d9ec54e661ecd4ffc7b398c8e4f55ed\"" Jan 30 13:41:42.828720 systemd[1]: Started cri-containerd-3c272f6054aa7fd5270ba217fe78960b8d9ec54e661ecd4ffc7b398c8e4f55ed.scope - libcontainer container 3c272f6054aa7fd5270ba217fe78960b8d9ec54e661ecd4ffc7b398c8e4f55ed. Jan 30 13:41:43.388016 containerd[1462]: time="2025-01-30T13:41:43.387956933Z" level=info msg="StartContainer for \"3c272f6054aa7fd5270ba217fe78960b8d9ec54e661ecd4ffc7b398c8e4f55ed\" returns successfully" Jan 30 13:41:44.044166 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:47020.service - OpenSSH per-connection server daemon (10.0.0.1:47020). Jan 30 13:41:44.097174 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:44.098905 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:44.103769 systemd-logind[1449]: New session 13 of user core. Jan 30 13:41:44.111666 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:41:44.237243 sshd[4968]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:44.241681 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:47020.service: Deactivated successfully. Jan 30 13:41:44.243474 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:41:44.244063 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:41:44.244988 systemd-logind[1449]: Removed session 13. Jan 30 13:41:44.458222 kubelet[2501]: I0130 13:41:44.456189 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58bbf48d84-qbktp" podStartSLOduration=30.959851262 podStartE2EDuration="34.4561677s" podCreationTimestamp="2025-01-30 13:41:10 +0000 UTC" firstStartedPulling="2025-01-30 13:41:39.278018006 +0000 UTC m=+42.807190299" lastFinishedPulling="2025-01-30 13:41:42.774334444 +0000 UTC m=+46.303506737" observedRunningTime="2025-01-30 13:41:44.433155476 +0000 UTC m=+47.962327768" watchObservedRunningTime="2025-01-30 13:41:44.4561677 +0000 UTC m=+47.985339992" Jan 30 13:41:44.622686 containerd[1462]: time="2025-01-30T13:41:44.622627623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:44.623462 containerd[1462]: time="2025-01-30T13:41:44.623414721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:41:44.624620 containerd[1462]: time="2025-01-30T13:41:44.624589897Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:44.626957 containerd[1462]: time="2025-01-30T13:41:44.626854337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:41:44.627562 containerd[1462]: time="2025-01-30T13:41:44.627494038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.853041031s" Jan 30 13:41:44.627562 containerd[1462]: time="2025-01-30T13:41:44.627551285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:41:44.629669 containerd[1462]: time="2025-01-30T13:41:44.629643282Z" level=info msg="CreateContainer within sandbox \"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:41:44.649648 containerd[1462]: time="2025-01-30T13:41:44.649597466Z" level=info msg="CreateContainer within sandbox \"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e0b7d1d4a82265d9c041f9f3cfdfc66daa90abc70866b9d2e5657c8ccffd8b40\"" Jan 30 13:41:44.650411 containerd[1462]: time="2025-01-30T13:41:44.650372591Z" level=info msg="StartContainer for \"e0b7d1d4a82265d9c041f9f3cfdfc66daa90abc70866b9d2e5657c8ccffd8b40\"" Jan 30 13:41:44.683300 systemd[1]: Started cri-containerd-e0b7d1d4a82265d9c041f9f3cfdfc66daa90abc70866b9d2e5657c8ccffd8b40.scope - libcontainer container e0b7d1d4a82265d9c041f9f3cfdfc66daa90abc70866b9d2e5657c8ccffd8b40. Jan 30 13:41:44.720136 containerd[1462]: time="2025-01-30T13:41:44.719778665Z" level=info msg="StartContainer for \"e0b7d1d4a82265d9c041f9f3cfdfc66daa90abc70866b9d2e5657c8ccffd8b40\" returns successfully" Jan 30 13:41:45.409123 kubelet[2501]: I0130 13:41:45.408131 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9g6zr" podStartSLOduration=29.986098412 podStartE2EDuration="35.408114074s" podCreationTimestamp="2025-01-30 13:41:10 +0000 UTC" firstStartedPulling="2025-01-30 13:41:39.206377252 +0000 UTC m=+42.735549545" lastFinishedPulling="2025-01-30 13:41:44.628392925 +0000 UTC m=+48.157565207" observedRunningTime="2025-01-30 13:41:45.407944236 +0000 UTC m=+48.937116528" watchObservedRunningTime="2025-01-30 13:41:45.408114074 +0000 UTC m=+48.937286366" Jan 30 13:41:45.621498 kubelet[2501]: I0130 13:41:45.621468 2501 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:41:45.621498 kubelet[2501]: I0130 13:41:45.621498 2501 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:41:49.248528 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028). Jan 30 13:41:49.288324 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:49.289987 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:49.293820 systemd-logind[1449]: New session 14 of user core. Jan 30 13:41:49.302649 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 13:41:49.426060 sshd[5053]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:49.432462 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:47028.service: Deactivated successfully. Jan 30 13:41:49.434744 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:41:49.435448 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:41:49.436416 systemd-logind[1449]: Removed session 14. Jan 30 13:41:50.680444 kubelet[2501]: I0130 13:41:50.680372 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:41:54.442970 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:38730.service - OpenSSH per-connection server daemon (10.0.0.1:38730). Jan 30 13:41:54.479567 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 38730 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:54.481381 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:54.486392 systemd-logind[1449]: New session 15 of user core. Jan 30 13:41:54.491645 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:41:54.607194 sshd[5078]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:54.612206 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:38730.service: Deactivated successfully. Jan 30 13:41:54.614089 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:41:54.614783 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:41:54.615674 systemd-logind[1449]: Removed session 15. Jan 30 13:41:56.545964 containerd[1462]: time="2025-01-30T13:41:56.545922322Z" level=info msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.578 [WARNING][5106] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5286e518-a601-45d8-b742-fd5b70c8b40f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714", Pod:"coredns-668d6bf9bc-xgt7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb25268cbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.579 [INFO][5106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.579 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" iface="eth0" netns="" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.579 [INFO][5106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.579 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.599 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.599 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.599 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.604 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.604 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.605 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:56.610085 containerd[1462]: 2025-01-30 13:41:56.607 [INFO][5106] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.610558 containerd[1462]: time="2025-01-30T13:41:56.610130601Z" level=info msg="TearDown network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" successfully" Jan 30 13:41:56.610558 containerd[1462]: time="2025-01-30T13:41:56.610162561Z" level=info msg="StopPodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" returns successfully" Jan 30 13:41:56.610752 containerd[1462]: time="2025-01-30T13:41:56.610725518Z" level=info msg="RemovePodSandbox for \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" Jan 30 13:41:56.614416 containerd[1462]: time="2025-01-30T13:41:56.614393378Z" level=info msg="Forcibly stopping sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\"" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.650 [WARNING][5138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5286e518-a601-45d8-b742-fd5b70c8b40f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bff75fa81320424cd176e0942bf2c08c8261616e6374f9432ced6fe39695714", Pod:"coredns-668d6bf9bc-xgt7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb25268cbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.650 [INFO][5138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.650 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" iface="eth0" netns="" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.650 [INFO][5138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.650 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.669 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.669 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.670 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.674 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.674 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" HandleID="k8s-pod-network.734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Workload="localhost-k8s-coredns--668d6bf9bc--xgt7b-eth0" Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.675 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:56.680402 containerd[1462]: 2025-01-30 13:41:56.678 [INFO][5138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8" Jan 30 13:41:56.680858 containerd[1462]: time="2025-01-30T13:41:56.680431370Z" level=info msg="TearDown network for sandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" successfully" Jan 30 13:41:56.766599 containerd[1462]: time="2025-01-30T13:41:56.766540463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:56.766763 containerd[1462]: time="2025-01-30T13:41:56.766632807Z" level=info msg="RemovePodSandbox \"734b6259f2f6b12cb154173b435528f14c55e9e89413ef2146b3e5cfd54a42a8\" returns successfully" Jan 30 13:41:56.767301 containerd[1462]: time="2025-01-30T13:41:56.767262057Z" level=info msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.796 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9", Pod:"calico-apiserver-7b5f976dbf-r7hdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44decc22375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.796 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.796 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" iface="eth0" netns="" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.796 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.796 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.814 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.814 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.814 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.819 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.819 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.820 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:56.824341 containerd[1462]: 2025-01-30 13:41:56.822 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.824341 containerd[1462]: time="2025-01-30T13:41:56.824314985Z" level=info msg="TearDown network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" successfully" Jan 30 13:41:56.824783 containerd[1462]: time="2025-01-30T13:41:56.824344740Z" level=info msg="StopPodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" returns successfully" Jan 30 13:41:56.824946 containerd[1462]: time="2025-01-30T13:41:56.824916913Z" level=info msg="RemovePodSandbox for \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" Jan 30 13:41:56.824990 containerd[1462]: time="2025-01-30T13:41:56.824951278Z" level=info msg="Forcibly stopping sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\"" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.853 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3d49e37-12b3-413c-8b6b-5cfccd4b4b80", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"508c6edcaf39c9f0bb22898df6a3aee2bf2baadea84eb368a3902f39d59433a9", Pod:"calico-apiserver-7b5f976dbf-r7hdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44decc22375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.854 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.854 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" iface="eth0" netns="" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.854 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.854 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.873 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.873 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.873 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.877 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.877 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" HandleID="k8s-pod-network.ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--r7hdj-eth0" Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.878 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:56.882705 containerd[1462]: 2025-01-30 13:41:56.880 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50" Jan 30 13:41:56.883209 containerd[1462]: time="2025-01-30T13:41:56.882747388Z" level=info msg="TearDown network for sandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" successfully" Jan 30 13:41:56.895612 containerd[1462]: time="2025-01-30T13:41:56.895572287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:56.895726 containerd[1462]: time="2025-01-30T13:41:56.895631227Z" level=info msg="RemovePodSandbox \"ca57b0832bf3d88f3b66d485ab4a998bb961a1408cfef7138da4e91ca1bcdb50\" returns successfully" Jan 30 13:41:56.896066 containerd[1462]: time="2025-01-30T13:41:56.896044153Z" level=info msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.926 [WARNING][5230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0", GenerateName:"calico-kube-controllers-58bbf48d84-", Namespace:"calico-system", SelfLink:"", UID:"4f7442c3-8bdd-40c7-a454-8cfac24075e7", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bbf48d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f", Pod:"calico-kube-controllers-58bbf48d84-qbktp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7aead9e4f66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.926 [INFO][5230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.926 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" iface="eth0" netns="" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.926 [INFO][5230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.926 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.945 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.945 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.945 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.949 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.949 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.950 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:56.954851 containerd[1462]: 2025-01-30 13:41:56.952 [INFO][5230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:56.955362 containerd[1462]: time="2025-01-30T13:41:56.954889612Z" level=info msg="TearDown network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" successfully" Jan 30 13:41:56.955362 containerd[1462]: time="2025-01-30T13:41:56.954920359Z" level=info msg="StopPodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" returns successfully" Jan 30 13:41:56.955425 containerd[1462]: time="2025-01-30T13:41:56.955389629Z" level=info msg="RemovePodSandbox for \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" Jan 30 13:41:56.955425 containerd[1462]: time="2025-01-30T13:41:56.955413334Z" level=info msg="Forcibly stopping sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\"" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:56.986 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0", GenerateName:"calico-kube-controllers-58bbf48d84-", Namespace:"calico-system", SelfLink:"", UID:"4f7442c3-8bdd-40c7-a454-8cfac24075e7", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bbf48d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd19d12ae057f2f721c8855e976f680387c34fc99d2b57f1ab22111f5adf999f", Pod:"calico-kube-controllers-58bbf48d84-qbktp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7aead9e4f66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:56.987 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:56.987 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" iface="eth0" netns="" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:56.987 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:56.987 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.006 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.006 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.006 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.011 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.011 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" HandleID="k8s-pod-network.57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Workload="localhost-k8s-calico--kube--controllers--58bbf48d84--qbktp-eth0" Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.012 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.016774 containerd[1462]: 2025-01-30 13:41:57.014 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7" Jan 30 13:41:57.017186 containerd[1462]: time="2025-01-30T13:41:57.016814136Z" level=info msg="TearDown network for sandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" successfully" Jan 30 13:41:57.061932 containerd[1462]: time="2025-01-30T13:41:57.061884963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:57.062052 containerd[1462]: time="2025-01-30T13:41:57.061949504Z" level=info msg="RemovePodSandbox \"57fba341939ca6f9a0ec0191bdca2ac147cf22963bd1945243d2b83d03a2f7e7\" returns successfully" Jan 30 13:41:57.062370 containerd[1462]: time="2025-01-30T13:41:57.062349614Z" level=info msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.094 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc98b6c-2623-4585-9bb5-117c79a9fe02", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa", Pod:"coredns-668d6bf9bc-6wdhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif13308af907", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.094 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.094 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" iface="eth0" netns="" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.094 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.094 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.113 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.113 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.113 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.117 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.117 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.118 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.122706 containerd[1462]: 2025-01-30 13:41:57.120 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.122706 containerd[1462]: time="2025-01-30T13:41:57.122682250Z" level=info msg="TearDown network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" successfully" Jan 30 13:41:57.123253 containerd[1462]: time="2025-01-30T13:41:57.122712877Z" level=info msg="StopPodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" returns successfully" Jan 30 13:41:57.123253 containerd[1462]: time="2025-01-30T13:41:57.123223475Z" level=info msg="RemovePodSandbox for \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" Jan 30 13:41:57.123253 containerd[1462]: time="2025-01-30T13:41:57.123249685Z" level=info msg="Forcibly stopping sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\"" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.155 [WARNING][5319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc98b6c-2623-4585-9bb5-117c79a9fe02", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432e09a46f96d1f2eec2ab9daa8ff865893d5a0e45bb24a77805f1048dccf1fa", Pod:"coredns-668d6bf9bc-6wdhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif13308af907", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.156 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.156 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" iface="eth0" netns="" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.156 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.156 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.174 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.174 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.174 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.178 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.178 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" HandleID="k8s-pod-network.2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Workload="localhost-k8s-coredns--668d6bf9bc--6wdhf-eth0" Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.179 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.183645 containerd[1462]: 2025-01-30 13:41:57.181 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed" Jan 30 13:41:57.184208 containerd[1462]: time="2025-01-30T13:41:57.183701534Z" level=info msg="TearDown network for sandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" successfully" Jan 30 13:41:57.187851 containerd[1462]: time="2025-01-30T13:41:57.187812865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:57.187937 containerd[1462]: time="2025-01-30T13:41:57.187907784Z" level=info msg="RemovePodSandbox \"2d80ead4b6ce25361f3e3dba3e26b19b0c64a31f15e9264a5489545b4c2839ed\" returns successfully" Jan 30 13:41:57.188394 containerd[1462]: time="2025-01-30T13:41:57.188374289Z" level=info msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.216 [WARNING][5349] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9g6zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f7c933-d0e1-4d42-a085-53875d9b091a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2", Pod:"csi-node-driver-9g6zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3377850fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.217 [INFO][5349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.217 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" iface="eth0" netns="" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.217 [INFO][5349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.217 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.235 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.235 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.235 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.239 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.239 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.240 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.244836 containerd[1462]: 2025-01-30 13:41:57.242 [INFO][5349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.245543 containerd[1462]: time="2025-01-30T13:41:57.244871640Z" level=info msg="TearDown network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" successfully" Jan 30 13:41:57.245543 containerd[1462]: time="2025-01-30T13:41:57.244896497Z" level=info msg="StopPodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" returns successfully" Jan 30 13:41:57.245543 containerd[1462]: time="2025-01-30T13:41:57.245383350Z" level=info msg="RemovePodSandbox for \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" Jan 30 13:41:57.245543 containerd[1462]: time="2025-01-30T13:41:57.245404800Z" level=info msg="Forcibly stopping sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\"" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.278 [WARNING][5378] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9g6zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f7c933-d0e1-4d42-a085-53875d9b091a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa0843ad5946ace0f5f2590c41e8b84cc5d117fdb9fa9cb7b5a1a3ee317144f2", Pod:"csi-node-driver-9g6zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3377850fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.278 [INFO][5378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.278 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" iface="eth0" netns="" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.278 [INFO][5378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.278 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.298 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.298 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.298 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.303 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.303 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" HandleID="k8s-pod-network.5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Workload="localhost-k8s-csi--node--driver--9g6zr-eth0" Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.304 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.308907 containerd[1462]: 2025-01-30 13:41:57.306 [INFO][5378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf" Jan 30 13:41:57.309330 containerd[1462]: time="2025-01-30T13:41:57.308946285Z" level=info msg="TearDown network for sandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" successfully" Jan 30 13:41:57.313621 containerd[1462]: time="2025-01-30T13:41:57.313570590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:57.313689 containerd[1462]: time="2025-01-30T13:41:57.313645500Z" level=info msg="RemovePodSandbox \"5e7de3825f5703d7a344c5b4417d718423583091e24b9d35f7db74c48f9141bf\" returns successfully" Jan 30 13:41:57.314093 containerd[1462]: time="2025-01-30T13:41:57.314070307Z" level=info msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.349 [WARNING][5407] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e56822bd-9fb7-4fe0-827c-0d6527cef94c", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9", Pod:"calico-apiserver-7b5f976dbf-5c8cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40295aa86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.349 [INFO][5407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.349 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" iface="eth0" netns="" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.349 [INFO][5407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.349 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.369 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.369 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.369 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.374 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.374 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.375 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.380185 containerd[1462]: 2025-01-30 13:41:57.377 [INFO][5407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.380989 containerd[1462]: time="2025-01-30T13:41:57.380172014Z" level=info msg="TearDown network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" successfully" Jan 30 13:41:57.380989 containerd[1462]: time="2025-01-30T13:41:57.380201962Z" level=info msg="StopPodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" returns successfully" Jan 30 13:41:57.380989 containerd[1462]: time="2025-01-30T13:41:57.380882638Z" level=info msg="RemovePodSandbox for \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" Jan 30 13:41:57.380989 containerd[1462]: time="2025-01-30T13:41:57.380953230Z" level=info msg="Forcibly stopping sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\"" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.413 [WARNING][5437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0", GenerateName:"calico-apiserver-7b5f976dbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e56822bd-9fb7-4fe0-827c-0d6527cef94c", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f976dbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11b3472a147cd04152b119d35278720de3584ed08336fe978111ca2c62d318d9", Pod:"calico-apiserver-7b5f976dbf-5c8cv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40295aa86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.414 [INFO][5437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.414 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" iface="eth0" netns="" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.414 [INFO][5437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.414 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.433 [INFO][5444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.433 [INFO][5444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.433 [INFO][5444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.439 [WARNING][5444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.439 [INFO][5444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" HandleID="k8s-pod-network.7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Workload="localhost-k8s-calico--apiserver--7b5f976dbf--5c8cv-eth0" Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.441 [INFO][5444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:41:57.445361 containerd[1462]: 2025-01-30 13:41:57.443 [INFO][5437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263" Jan 30 13:41:57.445786 containerd[1462]: time="2025-01-30T13:41:57.445395205Z" level=info msg="TearDown network for sandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" successfully" Jan 30 13:41:57.449308 containerd[1462]: time="2025-01-30T13:41:57.449277788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:41:57.449363 containerd[1462]: time="2025-01-30T13:41:57.449322091Z" level=info msg="RemovePodSandbox \"7276e10c8346d2c48146772bd823f51159614da8c19528cab89caea1b56cc263\" returns successfully" Jan 30 13:41:59.622662 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:38744.service - OpenSSH per-connection server daemon (10.0.0.1:38744). Jan 30 13:41:59.665383 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 38744 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:59.666774 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:59.670429 systemd-logind[1449]: New session 16 of user core. Jan 30 13:41:59.679635 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:41:59.787666 sshd[5455]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:59.799489 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:38744.service: Deactivated successfully. Jan 30 13:41:59.802037 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:41:59.804192 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:41:59.810826 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:38758.service - OpenSSH per-connection server daemon (10.0.0.1:38758). Jan 30 13:41:59.811823 systemd-logind[1449]: Removed session 16. Jan 30 13:41:59.840947 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 38758 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:41:59.842448 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:41:59.846274 systemd-logind[1449]: New session 17 of user core. Jan 30 13:41:59.854629 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:42:00.168998 sshd[5469]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:00.181418 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:38758.service: Deactivated successfully. Jan 30 13:42:00.183091 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 30 13:42:00.184809 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:42:00.186062 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:38772.service - OpenSSH per-connection server daemon (10.0.0.1:38772).
Jan 30 13:42:00.186806 systemd-logind[1449]: Removed session 17.
Jan 30 13:42:00.229759 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 38772 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:00.230880 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:00.236095 systemd-logind[1449]: New session 18 of user core.
Jan 30 13:42:00.240650 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:42:01.117699 sshd[5481]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:01.125946 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:38772.service: Deactivated successfully.
Jan 30 13:42:01.128090 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:42:01.129062 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:42:01.141893 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874).
Jan 30 13:42:01.143747 systemd-logind[1449]: Removed session 18.
Jan 30 13:42:01.173329 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:01.175022 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:01.179611 systemd-logind[1449]: New session 19 of user core.
Jan 30 13:42:01.184694 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:42:01.394874 sshd[5499]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:01.402627 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:46874.service: Deactivated successfully.
Jan 30 13:42:01.405081 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:42:01.408024 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:42:01.414818 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:46890.service - OpenSSH per-connection server daemon (10.0.0.1:46890).
Jan 30 13:42:01.415824 systemd-logind[1449]: Removed session 19.
Jan 30 13:42:01.446232 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 46890 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:01.448139 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:01.452198 systemd-logind[1449]: New session 20 of user core.
Jan 30 13:42:01.462648 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:42:01.584841 sshd[5513]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:01.589375 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:46890.service: Deactivated successfully.
Jan 30 13:42:01.591498 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:42:01.592205 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:42:01.593029 systemd-logind[1449]: Removed session 20.
Jan 30 13:42:05.518804 kubelet[2501]: I0130 13:42:05.518765 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:42:06.597747 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:46892.service - OpenSSH per-connection server daemon (10.0.0.1:46892).
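Each sshd connection above runs in its own transient systemd unit whose name encodes the connection tuple, for example sshd@17-10.0.0.64:22-10.0.0.1:38772.service for listener 10.0.0.64:22 and peer 10.0.0.1:38772. When auditing bursts of short-lived sessions like sessions 16 through 20 here, a small helper can pull those tuples back out of the journal. The sketch below infers the sshd@<seq>-<local>:<port>-<peer>:<port>.service scheme from this log alone, and parseUnit and sshConn are hypothetical names, not part of any systemd or OpenSSH API.

```go
// Hypothetical helper for reading the per-connection sshd unit names that
// appear in the journal records above.
package main

import (
	"fmt"
	"regexp"
)

// unitRE splits a name such as "sshd@15-10.0.0.64:22-10.0.0.1:38744.service"
// into sequence number, local address/port, and peer address/port.
var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

// sshConn is a record type for one per-connection unit.
type sshConn struct {
	Seq, LocalIP, LocalPort, PeerIP, PeerPort string
}

func parseUnit(name string) (sshConn, bool) {
	m := unitRE.FindStringSubmatch(name)
	if m == nil {
		return sshConn{}, false
	}
	return sshConn{Seq: m[1], LocalIP: m[2], LocalPort: m[3], PeerIP: m[4], PeerPort: m[5]}, true
}

func main() {
	// Unit names taken verbatim from the journal records above.
	for _, u := range []string{
		"sshd@15-10.0.0.64:22-10.0.0.1:38744.service",
		"sshd@19-10.0.0.64:22-10.0.0.1:46890.service",
	} {
		if c, ok := parseUnit(u); ok {
			fmt.Printf("connection %s: listener %s:%s, peer %s:%s\n",
				c.Seq, c.LocalIP, c.LocalPort, c.PeerIP, c.PeerPort)
		}
	}
}
```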
Jan 30 13:42:06.633465 sshd[5534]: Accepted publickey for core from 10.0.0.1 port 46892 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:06.635110 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:06.639857 systemd-logind[1449]: New session 21 of user core.
Jan 30 13:42:06.649655 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:42:06.767426 sshd[5534]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:06.771388 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:46892.service: Deactivated successfully.
Jan 30 13:42:06.773361 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:42:06.774177 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:42:06.775122 systemd-logind[1449]: Removed session 21.
Jan 30 13:42:07.558646 kubelet[2501]: E0130 13:42:07.558607 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:11.778296 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:55456.service - OpenSSH per-connection server daemon (10.0.0.1:55456).
Jan 30 13:42:11.825861 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 55456 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:11.827561 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:11.831548 systemd-logind[1449]: New session 22 of user core.
Jan 30 13:42:11.836828 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:42:11.951672 sshd[5570]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:11.955249 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:55456.service: Deactivated successfully.
Jan 30 13:42:11.957070 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:42:11.957625 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:42:11.958577 systemd-logind[1449]: Removed session 22.
Jan 30 13:42:16.965864 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:55462.service - OpenSSH per-connection server daemon (10.0.0.1:55462).
Jan 30 13:42:17.005270 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 55462 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:42:17.007032 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:42:17.011307 systemd-logind[1449]: New session 23 of user core.
Jan 30 13:42:17.020789 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:42:17.127295 sshd[5610]: pam_unix(sshd:session): session closed for user core
Jan 30 13:42:17.131973 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:55462.service: Deactivated successfully.
Jan 30 13:42:17.134384 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:42:17.135247 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:42:17.137128 systemd-logind[1449]: Removed session 23.
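The "Nameserver limits exceeded" record from kubelet's dns.go above indicates that the node's resolv.conf listed more nameservers than the three the glibc resolver supports (MAXNS), so kubelet kept the first three (1.1.1.1 1.0.0.1 8.8.8.8) and logged the rest as omitted. Below is a minimal sketch of that cap under a simplified resolv.conf parser; nameservers is a stand-in function, not kubelet's actual dns.go implementation.

```go
// Minimal sketch of the check behind the "Nameserver limits exceeded" error:
// keep at most three nameservers and report the rest as omitted.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; kubelet enforces the same cap

// nameservers scans resolv.conf content, keeping the first three
// "nameserver" entries and collecting any overflow.
func nameservers(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, omitted := nameservers(conf)
	if len(omitted) > 0 {
		// Mirrors the spirit of the kubelet message in the log above.
		fmt.Printf("Nameserver limits were exceeded, omitted %v; applied: %s\n",
			omitted, strings.Join(kept, " "))
	}
}
```

Since this error repeats without a matching resolv.conf change in the log, trimming the node's resolver list to three entries (or pointing kubelet at a dedicated resolv.conf via its resolvConf setting) would be the usual way to quiet it.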