Jan 13 21:21:38.579794 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:21:38.579827 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:38.579844 kernel: BIOS-provided physical RAM map:
Jan 13 21:21:38.579966 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 21:21:38.579979 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 21:21:38.579989 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 21:21:38.580000 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 21:21:38.580009 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 21:21:38.580019 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 13 21:21:38.580028 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 13 21:21:38.580044 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 13 21:21:38.580054 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 13 21:21:38.580064 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 13 21:21:38.580074 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 13 21:21:38.580086 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 13 21:21:38.580097 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 21:21:38.580111 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 13 21:21:38.580121 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 13 21:21:38.580132 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 21:21:38.580142 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:21:38.580152 kernel: NX (Execute Disable) protection: active
Jan 13 21:21:38.580162 kernel: APIC: Static calls initialized
Jan 13 21:21:38.580172 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:21:38.580183 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 13 21:21:38.580193 kernel: SMBIOS 2.8 present.
Jan 13 21:21:38.580204 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 13 21:21:38.580214 kernel: Hypervisor detected: KVM
Jan 13 21:21:38.580228 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:21:38.580238 kernel: kvm-clock: using sched offset of 4747314673 cycles
Jan 13 21:21:38.580249 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:21:38.580260 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:21:38.580271 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:21:38.580356 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:21:38.580371 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 13 21:21:38.580382 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 21:21:38.580392 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:21:38.580408 kernel: Using GB pages for direct mapping
Jan 13 21:21:38.580418 kernel: Secure boot disabled
Jan 13 21:21:38.580428 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:21:38.580437 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 21:21:38.580452 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:21:38.580462 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580473 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580485 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 21:21:38.580495 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580505 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580516 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580526 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:21:38.580536 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 21:21:38.580546 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 21:21:38.580559 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 21:21:38.580569 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 21:21:38.580579 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 21:21:38.580590 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 21:21:38.580601 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 21:21:38.580611 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 21:21:38.580622 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 21:21:38.580633 kernel: No NUMA configuration found
Jan 13 21:21:38.580644 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 13 21:21:38.580659 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 13 21:21:38.580671 kernel: Zone ranges:
Jan 13 21:21:38.580682 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:21:38.580694 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 13 21:21:38.580705 kernel: Normal empty
Jan 13 21:21:38.580716 kernel: Movable zone start for each node
Jan 13 21:21:38.580727 kernel: Early memory node ranges
Jan 13 21:21:38.580737 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 21:21:38.580748 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 21:21:38.580759 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 21:21:38.580773 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 13 21:21:38.580785 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 13 21:21:38.580796 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 13 21:21:38.580808 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 13 21:21:38.580818 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:21:38.580829 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 21:21:38.580840 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 21:21:38.580860 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:21:38.580875 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 13 21:21:38.580891 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 13 21:21:38.580901 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 13 21:21:38.580910 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:21:38.580919 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:21:38.580930 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:21:38.580939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:21:38.580948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:21:38.580957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:21:38.580966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:21:38.580979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:21:38.580988 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:21:38.580998 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:21:38.581007 kernel: TSC deadline timer available
Jan 13 21:21:38.581016 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:21:38.581025 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:21:38.581035 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:21:38.581047 kernel: kvm-guest: setup PV sched yield
Jan 13 21:21:38.581061 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:21:38.581076 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:21:38.581087 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:21:38.581096 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:21:38.581105 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:21:38.581115 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:21:38.581124 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:21:38.581133 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:21:38.581143 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:21:38.581154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:38.581176 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:21:38.581192 kernel: random: crng init done
Jan 13 21:21:38.581202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:21:38.581212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:21:38.581222 kernel: Fallback order for Node 0: 0
Jan 13 21:21:38.581233 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 13 21:21:38.581243 kernel: Policy zone: DMA32
Jan 13 21:21:38.581254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:21:38.581266 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 13 21:21:38.581355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:21:38.581370 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:21:38.581383 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:21:38.581397 kernel: Dynamic Preempt: voluntary
Jan 13 21:21:38.581426 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:21:38.581446 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:21:38.581461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:21:38.581476 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:21:38.581490 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:21:38.581504 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:21:38.581519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:21:38.581537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:21:38.581552 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:21:38.581567 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:21:38.581581 kernel: Console: colour dummy device 80x25
Jan 13 21:21:38.581595 kernel: printk: console [ttyS0] enabled
Jan 13 21:21:38.581610 kernel: ACPI: Core revision 20230628
Jan 13 21:21:38.581622 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:21:38.581633 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:21:38.581645 kernel: x2apic enabled
Jan 13 21:21:38.581657 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:21:38.581669 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:21:38.581681 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:21:38.581692 kernel: kvm-guest: setup PV IPIs
Jan 13 21:21:38.581704 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:21:38.581719 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:21:38.581731 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:21:38.581743 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:21:38.581755 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:21:38.581766 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:21:38.581778 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:21:38.581790 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:21:38.581801 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:21:38.581813 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:21:38.581828 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:21:38.581839 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:21:38.581862 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:21:38.581874 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:21:38.581885 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:21:38.581898 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:21:38.581910 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:21:38.581922 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:21:38.581938 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:21:38.581949 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:21:38.581961 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:21:38.581972 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:21:38.581984 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:21:38.581995 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:21:38.582007 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:21:38.582019 kernel: landlock: Up and running.
Jan 13 21:21:38.582031 kernel: SELinux: Initializing.
Jan 13 21:21:38.582043 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:21:38.582058 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:21:38.582069 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:21:38.582081 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:21:38.582093 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:21:38.582104 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:21:38.582116 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:21:38.582127 kernel: ... version: 0
Jan 13 21:21:38.582138 kernel: ... bit width: 48
Jan 13 21:21:38.582152 kernel: ... generic registers: 6
Jan 13 21:21:38.582164 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:21:38.582175 kernel: ... max period: 00007fffffffffff
Jan 13 21:21:38.582186 kernel: ... fixed-purpose events: 0
Jan 13 21:21:38.582196 kernel: ... event mask: 000000000000003f
Jan 13 21:21:38.582205 kernel: signal: max sigframe size: 1776
Jan 13 21:21:38.582215 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:21:38.582225 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:21:38.582235 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:21:38.582249 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:21:38.582259 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:21:38.582269 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:21:38.592944 kernel: smpboot: Max logical packages: 1
Jan 13 21:21:38.592995 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:21:38.593007 kernel: devtmpfs: initialized
Jan 13 21:21:38.593019 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:21:38.593030 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 21:21:38.593040 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 21:21:38.593051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 13 21:21:38.593087 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 21:21:38.593098 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 21:21:38.593109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:21:38.593120 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:21:38.593131 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:21:38.593144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:21:38.593156 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:21:38.593168 kernel: audit: type=2000 audit(1736803296.213:1): state=initialized audit_enabled=0 res=1
Jan 13 21:21:38.593185 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:21:38.593197 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:21:38.593209 kernel: cpuidle: using governor menu
Jan 13 21:21:38.593221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:21:38.593233 kernel: dca service started, version 1.12.1
Jan 13 21:21:38.593245 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:21:38.593258 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:21:38.593270 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:21:38.593298 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:21:38.593314 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:21:38.593326 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:21:38.593338 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:21:38.593351 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:21:38.593363 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:21:38.593375 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:21:38.593388 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:21:38.593400 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:21:38.593412 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:21:38.593429 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:21:38.593440 kernel: ACPI: Interpreter enabled
Jan 13 21:21:38.593452 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:21:38.593465 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:21:38.593477 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:21:38.593489 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:21:38.593499 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:21:38.593510 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:21:38.593826 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:21:38.594024 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:21:38.594190 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:21:38.594207 kernel: PCI host bridge to bus 0000:00
Jan 13 21:21:38.594392 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:21:38.594538 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:21:38.594681 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:38.594829 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:21:38.595007 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:21:38.595157 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 13 21:21:38.595320 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:21:38.595564 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:21:38.595755 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:21:38.595950 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 21:21:38.596145 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 21:21:38.596328 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 21:21:38.596510 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 21:21:38.596679 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:21:38.596871 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:21:38.597038 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 21:21:38.597208 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 21:21:38.597394 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 13 21:21:38.597569 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:21:38.597736 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 21:21:38.604173 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 21:21:38.604413 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 13 21:21:38.604647 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:21:38.604838 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 21:21:38.605025 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 21:21:38.605202 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 13 21:21:38.605397 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 21:21:38.605577 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:21:38.605749 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:21:38.605947 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:21:38.606122 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 21:21:38.606281 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 21:21:38.606473 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:21:38.606633 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 21:21:38.606650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:21:38.606661 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:21:38.606672 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:21:38.606683 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:21:38.606702 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:21:38.606713 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:21:38.606724 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:21:38.606735 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:21:38.606746 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:21:38.606757 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:21:38.606768 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:21:38.606780 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:21:38.606791 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:21:38.606806 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:21:38.606818 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:21:38.606830 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:21:38.606842 kernel: iommu: Default domain type: Translated
Jan 13 21:21:38.606866 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:21:38.606877 kernel: efivars: Registered efivars operations
Jan 13 21:21:38.606887 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:21:38.606897 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:21:38.606909 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 21:21:38.606924 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 13 21:21:38.606936 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 13 21:21:38.606947 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 13 21:21:38.607123 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:21:38.607305 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:21:38.607474 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:21:38.607491 kernel: vgaarb: loaded
Jan 13 21:21:38.607504 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:21:38.607521 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:21:38.607534 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:21:38.607545 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:21:38.607558 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:21:38.607569 kernel: pnp: PnP ACPI init
Jan 13 21:21:38.607747 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:21:38.607766 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:21:38.607779 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:21:38.607790 kernel: NET: Registered PF_INET protocol family
Jan 13 21:21:38.607807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:21:38.607818 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:21:38.607828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:21:38.607840 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:21:38.617030 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:21:38.617069 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:21:38.617079 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:21:38.617090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:21:38.617113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:21:38.617123 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:21:38.617357 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 21:21:38.617498 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 21:21:38.617629 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:21:38.617879 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:21:38.617999 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:38.618120 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:21:38.618247 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:21:38.618383 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 13 21:21:38.618397 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:21:38.618407 kernel: Initialise system trusted keyrings
Jan 13 21:21:38.618417 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:21:38.618426 kernel: Key type asymmetric registered
Jan 13 21:21:38.618436 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:21:38.618445 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:21:38.618455 kernel: io scheduler mq-deadline registered
Jan 13 21:21:38.618467 kernel: io scheduler kyber registered
Jan 13 21:21:38.618477 kernel: io scheduler bfq registered
Jan 13 21:21:38.618487 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:21:38.618497 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:21:38.618506 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:21:38.618516 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:21:38.618525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:21:38.618535 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:21:38.618545 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:21:38.618557 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:21:38.618567 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:21:38.618576 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:21:38.618719 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:21:38.618845 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:21:38.618982 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:21:37 UTC (1736803297)
Jan 13 21:21:38.619104 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:21:38.619116 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:21:38.619130 kernel: efifb: probing for efifb
Jan 13 21:21:38.619140 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 13 21:21:38.619150 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 13 21:21:38.619159 kernel: efifb: scrolling: redraw
Jan 13 21:21:38.619169 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 13 21:21:38.619179 kernel: Console: switching to colour frame buffer device 100x37
Jan 13 21:21:38.619210 kernel: fb0: EFI VGA frame buffer device
Jan 13 21:21:38.619223 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:21:38.619233 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:21:38.619248 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:21:38.619258 kernel: Segment Routing with IPv6
Jan 13 21:21:38.619268 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:21:38.619278 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:21:38.619299 kernel: Key type dns_resolver registered
Jan 13 21:21:38.619309 kernel: IPI shorthand broadcast: enabled
Jan 13 21:21:38.619319 kernel: sched_clock: Marking stable (1111627737, 191756268)->(1580760941, -277376936)
Jan 13 21:21:38.619329 kernel: registered taskstats version 1
Jan 13 21:21:38.619339 kernel: Loading compiled-in X.509 certificates
Jan 13 21:21:38.619352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:21:38.619362 kernel: Key type .fscrypt registered
Jan 13 21:21:38.619371 kernel: Key type fscrypt-provisioning registered
Jan 13 21:21:38.619381 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:21:38.619391 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:21:38.619401 kernel: ima: No architecture policies found
Jan 13 21:21:38.619411 kernel: clk: Disabling unused clocks
Jan 13 21:21:38.619421 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:21:38.619430 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:21:38.619443 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:21:38.619453 kernel: Run /init as init process
Jan 13 21:21:38.619463 kernel: with arguments:
Jan 13 21:21:38.619474 kernel: /init
Jan 13 21:21:38.619484 kernel: with environment:
Jan 13 21:21:38.619495 kernel: HOME=/
Jan 13 21:21:38.619505 kernel: TERM=linux
Jan 13 21:21:38.619516 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:21:38.619531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:21:38.619549 systemd[1]: Detected virtualization kvm.
Jan 13 21:21:38.619560 systemd[1]: Detected architecture x86-64.
Jan 13 21:21:38.619570 systemd[1]: Running in initrd.
Jan 13 21:21:38.619583 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:21:38.619596 systemd[1]: Hostname set to .
Jan 13 21:21:38.619607 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:21:38.619617 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:21:38.619628 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:21:38.619639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:21:38.619650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:21:38.619661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:21:38.619672 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:21:38.619686 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:21:38.619699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:21:38.619710 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:21:38.619721 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:21:38.619731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:21:38.619742 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:21:38.619755 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:21:38.619765 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:21:38.619776 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:21:38.619786 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:21:38.619797 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:21:38.619808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:21:38.619818 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:21:38.619829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:21:38.619840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:21:38.621743 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:21:38.621763 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:21:38.621774 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:21:38.621785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:21:38.621795 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:21:38.621806 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:21:38.621816 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:21:38.621827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:21:38.621884 systemd-journald[192]: Collecting audit messages is disabled.
Jan 13 21:21:38.621917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:38.621928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:21:38.621939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:21:38.621951 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:21:38.621963 systemd-journald[192]: Journal started
Jan 13 21:21:38.621986 systemd-journald[192]: Runtime Journal (/run/log/journal/b73c63923c3d4a93a198b430f3aaa596) is 6.0M, max 48.3M, 42.2M free.
Jan 13 21:21:38.632699 systemd-modules-load[193]: Inserted module 'overlay'
Jan 13 21:21:38.643074 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:21:38.646344 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:21:38.648537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:38.651958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:21:38.668886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:38.677132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:21:38.685813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:21:38.713085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:38.722350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:21:38.739733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:21:38.773438 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:21:38.765626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:21:38.779543 kernel: Bridge firewalling registered
Jan 13 21:21:38.782953 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 13 21:21:38.787947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:21:38.791250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:21:38.799467 dracut-cmdline[222]: dracut-dracut-053
Jan 13 21:21:38.803802 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:38.811171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:21:38.850712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:21:38.898619 systemd-resolved[247]: Positive Trust Anchors:
Jan 13 21:21:38.899526 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:21:38.900311 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:21:38.903890 systemd-resolved[247]: Defaulting to hostname 'linux'.
Jan 13 21:21:38.915131 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:21:38.920081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:21:39.042365 kernel: SCSI subsystem initialized
Jan 13 21:21:39.059653 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:21:39.091776 kernel: iscsi: registered transport (tcp)
Jan 13 21:21:39.126672 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:21:39.126765 kernel: QLogic iSCSI HBA Driver
Jan 13 21:21:39.266025 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:21:39.286394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:21:39.338537 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:21:39.338626 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:21:39.338646 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:21:39.426530 kernel: raid6: avx2x4 gen() 19403 MB/s
Jan 13 21:21:39.443394 kernel: raid6: avx2x2 gen() 19236 MB/s
Jan 13 21:21:39.460996 kernel: raid6: avx2x1 gen() 17692 MB/s
Jan 13 21:21:39.461076 kernel: raid6: using algorithm avx2x4 gen() 19403 MB/s
Jan 13 21:21:39.478802 kernel: raid6: .... xor() 5144 MB/s, rmw enabled
Jan 13 21:21:39.478902 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:21:39.523092 kernel: xor: automatically using best checksumming function avx
Jan 13 21:21:39.878435 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:21:39.909635 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:21:39.927620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:21:39.952422 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 13 21:21:39.966632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:21:39.987055 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:21:40.035340 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 13 21:21:40.151057 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:21:40.172650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:21:40.302893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:21:40.340458 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:21:40.374129 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:21:40.382963 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:21:40.390017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:40.394392 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:21:40.411522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:21:40.456622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:21:40.456802 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:40.465386 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:40.473135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:21:40.473320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:40.480519 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:40.509333 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:21:40.527625 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:21:40.527864 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:21:40.527882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:21:40.527898 kernel: GPT:9289727 != 19775487
Jan 13 21:21:40.527914 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:21:40.527941 kernel: GPT:9289727 != 19775487
Jan 13 21:21:40.527956 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:21:40.527971 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:21:40.540891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:40.548342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:21:40.553331 kernel: libata version 3.00 loaded.
Jan 13 21:21:40.579934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:40.620058 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:21:40.678680 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:21:40.678711 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:21:40.678727 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:21:40.679715 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:21:40.679923 kernel: scsi host0: ahci
Jan 13 21:21:40.680135 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:21:40.680154 kernel: scsi host1: ahci
Jan 13 21:21:40.680460 kernel: scsi host2: ahci
Jan 13 21:21:40.680673 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (469)
Jan 13 21:21:40.680692 kernel: scsi host3: ahci
Jan 13 21:21:40.680988 kernel: scsi host4: ahci
Jan 13 21:21:40.681238 kernel: scsi host5: ahci
Jan 13 21:21:40.681484 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 13 21:21:40.681505 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 13 21:21:40.681520 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 13 21:21:40.681535 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 13 21:21:40.681549 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 13 21:21:40.681564 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 13 21:21:40.624237 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:40.687350 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Jan 13 21:21:40.675490 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:21:40.680734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:40.698327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:21:40.719657 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:21:40.721698 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:21:40.737899 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:21:40.752683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:21:40.765217 disk-uuid[566]: Primary Header is updated.
Jan 13 21:21:40.765217 disk-uuid[566]: Secondary Entries is updated.
Jan 13 21:21:40.765217 disk-uuid[566]: Secondary Header is updated.
Jan 13 21:21:40.772332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:21:40.781330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:21:40.788311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:21:40.987320 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:21:40.991612 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:21:40.991660 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:21:40.991677 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:21:40.991691 kernel: ata3.00: applying bridge limits
Jan 13 21:21:40.991705 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:21:40.991731 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:21:40.993317 kernel: ata3.00: configured for UDMA/100
Jan 13 21:21:40.993347 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:21:40.994315 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:21:41.047322 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:21:41.060344 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:21:41.060367 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:21:41.795342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:21:41.800343 disk-uuid[567]: The operation has completed successfully.
Jan 13 21:21:41.898111 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:21:41.899972 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:21:41.932403 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:21:41.946323 sh[597]: Success
Jan 13 21:21:41.981528 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:21:42.067396 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:21:42.070900 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:21:42.074326 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:21:42.142691 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:21:42.142789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:42.142807 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:21:42.143951 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:21:42.145237 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:21:42.175758 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:21:42.179845 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:21:42.195758 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:21:42.202027 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:21:42.216194 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:42.216304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:42.216325 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:21:42.225860 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:21:42.241380 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:21:42.243298 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:42.269791 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:21:42.281532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:21:42.357183 ignition[693]: Ignition 2.19.0
Jan 13 21:21:42.357206 ignition[693]: Stage: fetch-offline
Jan 13 21:21:42.357262 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:42.357277 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:21:42.357436 ignition[693]: parsed url from cmdline: ""
Jan 13 21:21:42.357441 ignition[693]: no config URL provided
Jan 13 21:21:42.357453 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:21:42.357466 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:21:42.357506 ignition[693]: op(1): [started] loading QEMU firmware config module
Jan 13 21:21:42.357513 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:21:42.375154 ignition[693]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:21:42.410540 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:21:42.423570 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:21:42.433782 ignition[693]: parsing config with SHA512: 2e7399013cab93756c2a77882576918c7dea1673e5133e515ec40393a16c5aa72c5162c6e087321c7a04ba1f7c3c3be6e2bea2d90ed0d4498193b3b2161b6b3e
Jan 13 21:21:42.438731 unknown[693]: fetched base config from "system"
Jan 13 21:21:42.438749 unknown[693]: fetched user config from "qemu"
Jan 13 21:21:42.441740 ignition[693]: fetch-offline: fetch-offline passed
Jan 13 21:21:42.442013 ignition[693]: Ignition finished successfully
Jan 13 21:21:42.445351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:21:42.457861 systemd-networkd[785]: lo: Link UP
Jan 13 21:21:42.457882 systemd-networkd[785]: lo: Gained carrier
Jan 13 21:21:42.461445 systemd-networkd[785]: Enumeration completed
Jan 13 21:21:42.462267 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:21:42.464595 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:42.464602 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:21:42.466922 systemd[1]: Reached target network.target - Network.
Jan 13 21:21:42.469276 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:21:42.475534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:21:42.475572 systemd-networkd[785]: eth0: Link UP
Jan 13 21:21:42.475578 systemd-networkd[785]: eth0: Gained carrier
Jan 13 21:21:42.476604 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:42.491984 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:21:42.493074 ignition[788]: Ignition 2.19.0
Jan 13 21:21:42.493083 ignition[788]: Stage: kargs
Jan 13 21:21:42.493320 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:42.493342 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:21:42.494504 ignition[788]: kargs: kargs passed
Jan 13 21:21:42.494573 ignition[788]: Ignition finished successfully
Jan 13 21:21:42.499750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:21:42.519627 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:21:42.535549 ignition[796]: Ignition 2.19.0
Jan 13 21:21:42.535562 ignition[796]: Stage: disks
Jan 13 21:21:42.535734 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:42.535746 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:21:42.536602 ignition[796]: disks: disks passed
Jan 13 21:21:42.536652 ignition[796]: Ignition finished successfully
Jan 13 21:21:42.541792 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:21:42.544743 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:21:42.546969 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:21:42.549411 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:21:42.551542 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:21:42.553701 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:21:42.566710 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:21:42.578796 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:21:42.829871 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:21:42.845482 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:21:42.848392 systemd-resolved[247]: Detected conflict on linux IN A 10.0.0.60
Jan 13 21:21:42.848403 systemd-resolved[247]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Jan 13 21:21:42.976317 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:21:42.976793 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:21:42.978272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:21:42.987384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:21:42.989214 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:21:42.990335 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:21:42.990374 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:21:42.998143 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 13 21:21:42.998165 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:42.990395 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:21:43.003245 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:43.003266 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:21:42.998803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:21:43.006061 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:21:43.004588 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:21:43.007185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:21:43.039053 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:21:43.043159 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:21:43.047224 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:21:43.051385 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:21:43.126451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:43.135434 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:21:43.137071 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:21:43.142959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:21:43.162134 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:43.177490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:21:43.236223 ignition[934]: INFO : Ignition 2.19.0 Jan 13 21:21:43.236223 ignition[934]: INFO : Stage: mount Jan 13 21:21:43.237963 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:43.237963 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:21:43.237963 ignition[934]: INFO : mount: mount passed Jan 13 21:21:43.237963 ignition[934]: INFO : Ignition finished successfully Jan 13 21:21:43.274173 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:21:43.286382 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:21:43.293666 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:21:43.306306 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) Jan 13 21:21:43.306339 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:43.308579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:43.308597 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:21:43.312312 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:21:43.313755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:21:43.342420 ignition[961]: INFO : Ignition 2.19.0 Jan 13 21:21:43.342420 ignition[961]: INFO : Stage: files Jan 13 21:21:43.356931 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:43.356931 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:21:43.356931 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:21:43.356931 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:21:43.356931 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:21:43.363346 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:21:43.363346 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:21:43.363346 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:21:43.363346 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:43.363346 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:21:43.358764 unknown[961]: wrote ssh authorized keys file for user: core Jan 13 21:21:43.416481 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:43.585563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:21:43.600648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:21:43.779599 systemd-networkd[785]: eth0: Gained IPv6LL Jan 13 21:21:44.071859 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:21:44.481575 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:21:44.481575 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 21:21:44.485624 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:21:44.508054 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:21:44.512348 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:21:44.513981 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:21:44.513981 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:21:44.513981 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:21:44.513981 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:21:44.513981 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:21:44.513981 ignition[961]: INFO : files: files passed Jan 13 21:21:44.513981 ignition[961]: INFO : Ignition finished successfully Jan 13 21:21:44.516433 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:21:44.527433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:21:44.529183 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 13 21:21:44.531420 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:21:44.531526 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:21:44.538106 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:21:44.541365 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:44.541365 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:44.545908 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:44.543245 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:21:44.546091 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:21:44.549547 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:21:44.578094 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:21:44.578231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:21:44.579505 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:21:44.581749 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:21:44.583684 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:21:44.595466 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:21:44.610652 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:21:44.618517 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:21:44.628178 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:44.628344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:21:44.628690 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:21:44.664493 ignition[1016]: INFO : Ignition 2.19.0 Jan 13 21:21:44.664493 ignition[1016]: INFO : Stage: umount Jan 13 21:21:44.664493 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:44.664493 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:21:44.664493 ignition[1016]: INFO : umount: umount passed Jan 13 21:21:44.664493 ignition[1016]: INFO : Ignition finished successfully Jan 13 21:21:44.629026 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:21:44.629164 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:21:44.630049 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:21:44.630744 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:21:44.631054 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:21:44.631569 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:21:44.631905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:21:44.632253 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:21:44.632582 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:21:44.632932 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:21:44.633304 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:21:44.633601 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:21:44.633909 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:21:44.634018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:21:44.634770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:44.635110 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:44.635573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:21:44.635672 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:44.635925 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:21:44.636029 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:21:44.636756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:21:44.636865 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:21:44.637345 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:21:44.637610 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:21:44.641354 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:44.641659 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:21:44.642003 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:21:44.642578 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:21:44.642676 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:21:44.643097 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:21:44.643184 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:21:44.643610 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:21:44.643733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:21:44.644145 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:21:44.644248 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:21:44.645474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:21:44.646404 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:21:44.646774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:21:44.646893 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:44.647218 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:21:44.647337 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:21:44.650743 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:21:44.650851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:21:44.663862 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:21:44.663978 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:21:44.665648 systemd[1]: Stopped target network.target - Network. Jan 13 21:21:44.667307 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 13 21:21:44.667361 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:21:44.669087 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:21:44.669144 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:21:44.671139 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:21:44.671188 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:21:44.673525 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:21:44.673574 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:21:44.675599 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:21:44.677532 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:21:44.680823 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:21:44.681332 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 13 21:21:44.683822 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:21:44.683988 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:21:44.726960 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:21:44.727118 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:21:44.730784 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:21:44.730835 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:44.738375 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:21:44.740429 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:21:44.740487 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:21:44.742781 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:21:44.742831 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:44.745252 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:21:44.745312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:44.747600 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:21:44.747663 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:44.750040 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:44.770111 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:21:44.770303 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:44.772000 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:21:44.772107 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:21:44.774178 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:21:44.774248 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:21:44.775900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:21:44.775939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:44.777914 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:21:44.777964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:21:44.780132 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 13 21:21:44.780178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:21:44.782127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:21:44.782194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:44.793476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:21:44.795393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:21:44.795464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:44.797749 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:21:44.797800 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:21:44.800081 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:21:44.800136 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:44.800247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:21:44.800309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:44.801762 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:21:44.801875 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:21:45.096006 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:21:45.096168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:21:45.098332 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:21:45.099968 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:21:45.100033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:45.117567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:21:45.124894 systemd[1]: Switching root. Jan 13 21:21:45.161387 systemd-journald[192]: Journal stopped Jan 13 21:21:46.665231 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 21:21:46.665333 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:21:46.665350 kernel: SELinux: policy capability open_perms=1 Jan 13 21:21:46.665361 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:21:46.665372 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:21:46.665388 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:21:46.665408 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:21:46.665419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:21:46.665439 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:21:46.665450 kernel: audit: type=1403 audit(1736803305.796:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:21:46.665462 systemd[1]: Successfully loaded SELinux policy in 41.809ms. Jan 13 21:21:46.665476 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.031ms. Jan 13 21:21:46.665489 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:21:46.665502 systemd[1]: Detected virtualization kvm. 
Jan 13 21:21:46.665513 systemd[1]: Detected architecture x86-64. Jan 13 21:21:46.665528 systemd[1]: Detected first boot. Jan 13 21:21:46.665540 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:21:46.665552 zram_generator::config[1060]: No configuration found. Jan 13 21:21:46.665566 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:21:46.665579 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:21:46.665592 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:21:46.665604 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:21:46.665616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:21:46.665631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:21:46.665642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:21:46.665664 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:21:46.665677 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:21:46.665689 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:21:46.665701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:21:46.665712 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:21:46.665724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:46.665736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:46.665751 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:21:46.665763 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:21:46.665775 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:21:46.665787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:21:46.665798 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:21:46.665810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:46.665822 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:21:46.665835 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:21:46.665851 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:21:46.665863 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:21:46.665875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:21:46.665887 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:21:46.665899 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:21:46.665911 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:21:46.665922 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:21:46.665935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:21:46.665949 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:46.665961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
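The zram generator reports no configuration above, so no compressed swap device is set up on this boot. If one were wanted, a small drop-in is enough; a sketch in zram-generator.conf syntax, with arbitrary example values for size and algorithm:

    # Creates /dev/zram0 as swap on the next boot; the setup unit name is
    # the one zram-generator typically instantiates.
    cat > /etc/systemd/zram-generator.conf <<'EOF'
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd
    EOF
    systemctl daemon-reload
    systemctl start systemd-zram-setup@zram0.service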
Jan 13 21:21:46.665972 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:46.665984 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:21:46.665996 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:21:46.666008 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:21:46.666020 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:21:46.666032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:46.666044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:21:46.666058 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:21:46.666069 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:21:46.666082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:21:46.666093 systemd[1]: Reached target machines.target - Containers. Jan 13 21:21:46.666106 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:21:46.666118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:46.666130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:21:46.666142 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:21:46.666154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:46.666168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:21:46.666180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:46.666191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:21:46.666203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:46.666215 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:21:46.666227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:21:46.666239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:21:46.666250 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:21:46.666264 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:21:46.666276 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:21:46.666305 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:21:46.666318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:21:46.666329 kernel: loop: module loaded Jan 13 21:21:46.666340 kernel: fuse: init (API version 7.39) Jan 13 21:21:46.666369 systemd-journald[1123]: Collecting audit messages is disabled. Jan 13 21:21:46.666390 kernel: ACPI: bus type drm_connector registered Jan 13 21:21:46.666407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 21:21:46.666419 systemd-journald[1123]: Journal started Jan 13 21:21:46.666440 systemd-journald[1123]: Runtime Journal (/run/log/journal/b73c63923c3d4a93a198b430f3aaa596) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:21:46.366000 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:21:46.388627 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:21:46.389172 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:21:46.686718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:21:46.686791 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:21:46.688972 systemd[1]: Stopped verity-setup.service. Jan 13 21:21:46.691309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:46.694857 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:21:46.695710 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:21:46.696951 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:21:46.698228 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:21:46.699456 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:21:46.701071 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:21:46.702472 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:21:46.703847 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:46.709127 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:21:46.709399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:21:46.722967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:46.723149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:46.724707 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:21:46.724882 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:21:46.726328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:46.726503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:46.728195 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:21:46.728380 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:21:46.729815 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:46.729982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:46.731417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:46.733662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:21:46.735224 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:21:46.746709 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:21:46.754451 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:21:46.790482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:21:46.793144 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 13 21:21:46.794374 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:21:46.794407 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:21:46.796819 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:21:46.799413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:21:46.801799 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:21:46.803013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:46.804529 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:21:46.813554 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:21:46.815114 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:21:46.821436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:21:46.823083 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:21:46.823489 systemd-journald[1123]: Time spent on flushing to /var/log/journal/b73c63923c3d4a93a198b430f3aaa596 is 52.213ms for 992 entries. Jan 13 21:21:46.823489 systemd-journald[1123]: System Journal (/var/log/journal/b73c63923c3d4a93a198b430f3aaa596) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:21:47.166821 systemd-journald[1123]: Received client request to flush runtime journal. Jan 13 21:21:47.166869 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:21:47.166889 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:21:47.166906 kernel: loop1: detected capacity change from 0 to 142488 Jan 13 21:21:46.825365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:21:46.835327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:21:46.838023 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:21:46.841191 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:46.843438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:21:46.846975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:21:46.849136 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:21:47.082203 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:21:47.120009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:21:47.121543 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:21:47.142056 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 13 21:21:47.142074 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 13 21:21:47.164684 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:21:47.166355 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
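The journal sizes negotiated above (a small runtime journal under /run, a larger persistent one under /var/log/journal) follow journald's size heuristics; they can be pinned explicitly with stock journald.conf options. A sketch with example values only:

    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/size.conf <<'EOF'
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M
    EOF
    systemctl restart systemd-journald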
Jan 13 21:21:47.168213 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:47.170174 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:21:47.191710 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:21:47.193326 kernel: loop2: detected capacity change from 0 to 205544 Jan 13 21:21:47.193729 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:21:47.316642 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:21:47.326319 kernel: loop3: detected capacity change from 0 to 140768 Jan 13 21:21:47.326682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:21:47.341317 kernel: loop4: detected capacity change from 0 to 142488 Jan 13 21:21:47.352486 kernel: loop5: detected capacity change from 0 to 205544 Jan 13 21:21:47.355747 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 13 21:21:47.355769 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 13 21:21:47.362559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:47.369163 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:21:47.369770 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 13 21:21:47.376149 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:21:47.376165 systemd[1]: Reloading... Jan 13 21:21:47.460337 zram_generator::config[1230]: No configuration found. Jan 13 21:21:47.599470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:47.661072 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:21:47.661459 systemd[1]: Reloading finished in 284 ms. Jan 13 21:21:47.661611 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:21:47.705712 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:21:47.707533 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:21:47.709578 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:21:47.723945 systemd[1]: Starting ensure-sysext.service... Jan 13 21:21:47.726260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:21:47.733907 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:21:47.733933 systemd[1]: Reloading... Jan 13 21:21:47.767653 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:21:47.768114 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:21:47.769348 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:21:47.769769 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 13 21:21:47.769869 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. 
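The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. A hedged sketch of the minimum a custom extension needs before it can be merged the same way; the extension name and payload file are hypothetical:

    # A directory-based extension: a /usr subtree plus a release file.
    mkdir -p /etc/extensions/hello/usr/bin
    mkdir -p /etc/extensions/hello/usr/lib/extension-release.d
    # ID=_any skips the os-release match so the sketch works on any host.
    printf 'ID=_any\n' \
        > /etc/extensions/hello/usr/lib/extension-release.d/extension-release.hello
    install -m 0755 ./hello /etc/extensions/hello/usr/bin/hello
    systemd-sysext refresh     # re-merge; 'systemd-sysext status' shows the overlay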
Jan 13 21:21:47.778238 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:21:47.778254 systemd-tmpfiles[1266]: Skipping /boot Jan 13 21:21:47.796040 zram_generator::config[1292]: No configuration found. Jan 13 21:21:47.802904 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:21:47.803075 systemd-tmpfiles[1266]: Skipping /boot Jan 13 21:21:47.952699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:48.024972 systemd[1]: Reloading finished in 290 ms. Jan 13 21:21:48.060845 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:21:48.062573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:48.088762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:48.091803 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:21:48.094356 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:21:48.098728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:21:48.106535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:48.110429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:21:48.114123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:48.114669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:48.122994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:48.127873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:48.131855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:48.133048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:48.135729 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:21:48.136964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:48.138226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:21:48.140204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:48.140497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:48.144078 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:48.144328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:48.151434 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jan 13 21:21:48.151921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:48.152153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:48.154414 augenrules[1359]: No rules Jan 13 21:21:48.157188 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 13 21:21:48.161939 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:21:48.167116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:48.167499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:48.178594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:48.184523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:21:48.186807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:48.191362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:48.193387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:48.194970 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:21:48.196766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:48.197768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:48.199815 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:21:48.202351 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:21:48.204378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:48.204552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:48.206621 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:21:48.206944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:21:48.208900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:48.209068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:48.236392 systemd[1]: Finished ensure-sysext.service. Jan 13 21:21:48.238154 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:48.238403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:48.266242 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:21:48.267540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:21:48.267626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:21:48.276470 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:21:48.277847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:21:48.278389 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:21:48.285716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 13 21:21:48.312315 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1390) Jan 13 21:21:48.342312 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:21:48.345864 systemd-resolved[1336]: Positive Trust Anchors: Jan 13 21:21:48.345883 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:21:48.345914 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:21:48.348309 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:21:48.349806 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jan 13 21:21:48.351545 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:21:48.352828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:48.378357 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:21:48.378454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:21:48.388466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:21:48.393318 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 13 21:21:48.395189 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:21:48.395390 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:21:48.395571 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:21:48.415386 systemd-networkd[1404]: lo: Link UP Jan 13 21:21:48.415398 systemd-networkd[1404]: lo: Gained carrier Jan 13 21:21:48.417079 systemd-networkd[1404]: Enumeration completed Jan 13 21:21:48.417171 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:21:48.418560 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:48.418562 systemd[1]: Reached target network.target - Network. Jan 13 21:21:48.418570 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:21:48.421941 systemd-networkd[1404]: eth0: Link UP Jan 13 21:21:48.421952 systemd-networkd[1404]: eth0: Gained carrier Jan 13 21:21:48.421964 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:48.430535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:21:48.432167 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:21:48.433863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:21:48.440103 systemd[1]: Reached target time-set.target - System Time Set. 
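The match against /usr/lib/systemd/network/zz-default.network above is the catch-all DHCP policy; a more specific unit dropped into /etc takes precedence over it. A minimal sketch in systemd.network syntax (the interface glob and file name are arbitrary, and the real zz-default.network may differ in detail):

    mkdir -p /etc/systemd/network
    cat > /etc/systemd/network/50-dhcp.network <<'EOF'
    [Match]
    Name=eth*

    [Network]
    DHCP=yes
    EOF
    networkctl reload      # have systemd-networkd pick up the new unit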
Jan 13 21:21:48.441298 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:21:48.446467 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:21:48.447183 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Jan 13 21:21:48.448300 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:21:48.448354 systemd-timesyncd[1405]: Initial clock synchronization to Mon 2025-01-13 21:21:48.434493 UTC. Jan 13 21:21:48.449687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:48.455575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:21:48.455821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:48.462975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:48.609710 kernel: kvm_amd: TSC scaling supported Jan 13 21:21:48.609806 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:21:48.609828 kernel: kvm_amd: Nested Paging enabled Jan 13 21:21:48.610940 kernel: kvm_amd: LBR virtualization supported Jan 13 21:21:48.610969 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:21:48.611677 kernel: kvm_amd: Virtual GIF supported Jan 13 21:21:48.634316 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:21:48.650297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:48.672147 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:21:48.685462 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:21:48.696224 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:48.727766 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:21:48.729371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:48.730531 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:21:48.731813 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:21:48.733162 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:21:48.734718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:21:48.735992 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:21:48.737295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:21:48.738576 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:21:48.738622 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:21:48.739559 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:21:48.741161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:21:48.744130 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:21:48.760321 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:21:48.762982 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:21:48.780206 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
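systemd-timesyncd above syncs against 10.0.0.1, apparently the NTP server offered with the DHCP lease. To pin servers explicitly instead, a drop-in with the standard timesyncd options works; the host names below are placeholders:

    mkdir -p /etc/systemd/timesyncd.conf.d
    cat > /etc/systemd/timesyncd.conf.d/ntp.conf <<'EOF'
    [Time]
    NTP=0.pool.ntp.org 1.pool.ntp.org
    FallbackNTP=time.example.com
    EOF
    systemctl restart systemd-timesyncd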
Jan 13 21:21:48.781477 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:21:48.782444 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:21:48.783410 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:48.783436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:48.784574 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:21:48.786902 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:21:48.790419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:21:48.792863 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:48.797310 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:21:48.798650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:21:48.799440 jq[1438]: false Jan 13 21:21:48.801045 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:21:48.806446 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:21:48.810797 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:21:48.814461 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:21:48.816402 extend-filesystems[1439]: Found loop3 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found loop4 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found loop5 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found sr0 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda1 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda2 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda3 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found usr Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda4 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda6 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda7 Jan 13 21:21:48.816402 extend-filesystems[1439]: Found vda9 Jan 13 21:21:48.816402 extend-filesystems[1439]: Checking size of /dev/vda9 Jan 13 21:21:48.814968 dbus-daemon[1437]: [system] SELinux support is enabled Jan 13 21:21:48.831585 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:21:48.833434 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:21:48.834080 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:21:48.835469 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:21:48.839189 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:21:48.841237 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:21:48.845502 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:21:48.847467 jq[1456]: true Jan 13 21:21:48.848808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 21:21:48.850566 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:21:48.851051 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:21:48.851368 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:21:48.857493 extend-filesystems[1439]: Resized partition /dev/vda9 Jan 13 21:21:48.859194 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:21:48.860602 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:21:48.869611 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:21:48.873508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378) Jan 13 21:21:48.888333 jq[1463]: true Jan 13 21:21:48.893977 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:21:48.904903 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:21:48.904945 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:21:48.906379 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:21:48.906406 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:21:48.929342 tar[1461]: linux-amd64/helm Jan 13 21:21:48.934936 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:21:48.934966 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:21:48.936245 update_engine[1454]: I20250113 21:21:48.935915 1454 main.cc:92] Flatcar Update Engine starting Jan 13 21:21:48.936015 systemd-logind[1451]: New seat seat0. Jan 13 21:21:48.936980 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:21:48.951726 update_engine[1454]: I20250113 21:21:48.951670 1454 update_check_scheduler.cc:74] Next update check in 5m11s Jan 13 21:21:48.951709 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:21:48.959757 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:21:48.975309 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:21:49.038331 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:21:49.105608 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:21:49.134710 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:21:49.143362 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:21:49.143612 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:21:49.146435 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:21:49.233921 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:21:49.242774 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:21:49.256852 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:21:49.258785 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 13 21:21:49.259533 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:21:49.266347 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:21:49.294661 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:21:49.294661 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:21:49.294661 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:21:49.300831 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jan 13 21:21:49.300918 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:21:49.295790 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:21:49.296089 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:21:49.303177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:21:49.308796 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:21:49.589468 containerd[1470]: time="2025-01-13T21:21:49.589233116Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:21:49.647513 containerd[1470]: time="2025-01-13T21:21:49.647443799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.649765 containerd[1470]: time="2025-01-13T21:21:49.649514051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:49.649765 containerd[1470]: time="2025-01-13T21:21:49.649557315Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:21:49.649765 containerd[1470]: time="2025-01-13T21:21:49.649578136Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:21:49.649848 containerd[1470]: time="2025-01-13T21:21:49.649824538Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:21:49.649883 containerd[1470]: time="2025-01-13T21:21:49.649853427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.649973 containerd[1470]: time="2025-01-13T21:21:49.649947624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:49.649993 containerd[1470]: time="2025-01-13T21:21:49.649971738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650298 containerd[1470]: time="2025-01-13T21:21:49.650245768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650331 containerd[1470]: time="2025-01-13T21:21:49.650297270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650331 containerd[1470]: time="2025-01-13T21:21:49.650315959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650369 containerd[1470]: time="2025-01-13T21:21:49.650328222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650472 containerd[1470]: time="2025-01-13T21:21:49.650449516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650769 containerd[1470]: time="2025-01-13T21:21:49.650744126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650968 containerd[1470]: time="2025-01-13T21:21:49.650913099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:49.650989 containerd[1470]: time="2025-01-13T21:21:49.650965932Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:21:49.651112 containerd[1470]: time="2025-01-13T21:21:49.651088437Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:21:49.651184 containerd[1470]: time="2025-01-13T21:21:49.651164135Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:21:49.727657 tar[1461]: linux-amd64/LICENSE Jan 13 21:21:49.727792 tar[1461]: linux-amd64/README.md Jan 13 21:21:49.749133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:21:49.896939 containerd[1470]: time="2025-01-13T21:21:49.896739347Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:21:49.896939 containerd[1470]: time="2025-01-13T21:21:49.896861122Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:21:49.896939 containerd[1470]: time="2025-01-13T21:21:49.896882464Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:21:49.896939 containerd[1470]: time="2025-01-13T21:21:49.896915947Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:21:49.896939 containerd[1470]: time="2025-01-13T21:21:49.896939281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:21:49.897232 containerd[1470]: time="2025-01-13T21:21:49.897205082Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:21:49.897628 containerd[1470]: time="2025-01-13T21:21:49.897580765Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:21:49.897770 containerd[1470]: time="2025-01-13T21:21:49.897726254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:21:49.897770 containerd[1470]: time="2025-01-13T21:21:49.897753582Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 13 21:21:49.897866 containerd[1470]: time="2025-01-13T21:21:49.897770580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:21:49.897866 containerd[1470]: time="2025-01-13T21:21:49.897787727Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.897866 containerd[1470]: time="2025-01-13T21:21:49.897821191Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.897866 containerd[1470]: time="2025-01-13T21:21:49.897845986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.897866 containerd[1470]: time="2025-01-13T21:21:49.897863875Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897889711Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897908000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897923556Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897937820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897977361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898003 containerd[1470]: time="2025-01-13T21:21:49.897996329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898012786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898028022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898043037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898069284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898084630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898103919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898119566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898137033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898151988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898166424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898176 containerd[1470]: time="2025-01-13T21:21:49.898181238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898205924Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898230519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898245795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898261961Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898365166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898390732Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898405077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898420342Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898432435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898448471Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898461845Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:21:49.898524 containerd[1470]: time="2025-01-13T21:21:49.898475148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:21:49.898928 containerd[1470]: time="2025-01-13T21:21:49.898832934Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:21:49.899136 containerd[1470]: time="2025-01-13T21:21:49.898969222Z" level=info msg="Connect containerd service" Jan 13 21:21:49.899136 containerd[1470]: time="2025-01-13T21:21:49.899014669Z" level=info msg="using legacy CRI server" Jan 13 21:21:49.899136 containerd[1470]: time="2025-01-13T21:21:49.899023077Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:21:49.899224 containerd[1470]: time="2025-01-13T21:21:49.899146944Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:21:49.899946 containerd[1470]: time="2025-01-13T21:21:49.899914917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:21:49.900199 
containerd[1470]: time="2025-01-13T21:21:49.900120887Z" level=info msg="Start subscribing containerd event" Jan 13 21:21:49.900253 containerd[1470]: time="2025-01-13T21:21:49.900231961Z" level=info msg="Start recovering state" Jan 13 21:21:49.900406 containerd[1470]: time="2025-01-13T21:21:49.900360642Z" level=info msg="Start event monitor" Jan 13 21:21:49.900406 containerd[1470]: time="2025-01-13T21:21:49.900371273Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:21:49.900549 containerd[1470]: time="2025-01-13T21:21:49.900420753Z" level=info msg="Start snapshots syncer" Jan 13 21:21:49.900549 containerd[1470]: time="2025-01-13T21:21:49.900441004Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:21:49.900549 containerd[1470]: time="2025-01-13T21:21:49.900450965Z" level=info msg="Start streaming server" Jan 13 21:21:49.900549 containerd[1470]: time="2025-01-13T21:21:49.900469413Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:21:49.900625 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:21:49.901881 containerd[1470]: time="2025-01-13T21:21:49.901644592Z" level=info msg="containerd successfully booted in 0.316841s" Jan 13 21:21:49.987651 systemd-networkd[1404]: eth0: Gained IPv6LL Jan 13 21:21:49.991850 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:21:49.994277 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:21:50.008697 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:21:50.012320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:50.015016 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:21:50.045610 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:21:50.054766 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:21:50.055048 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:21:50.057961 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:21:51.317680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:51.319512 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:21:51.320909 systemd[1]: Startup finished in 1.377s (kernel) + 7.828s (initrd) + 5.565s (userspace) = 14.771s. Jan 13 21:21:51.330787 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:52.118193 kubelet[1550]: E0113 21:21:52.118116 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:52.122083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:52.122295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:21:52.122612 systemd[1]: kubelet.service: Consumed 1.835s CPU time. Jan 13 21:21:53.337949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 13 21:21:53.339106 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:33358.service - OpenSSH per-connection server daemon (10.0.0.1:33358). Jan 13 21:21:53.380834 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 33358 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:53.382924 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:53.391690 systemd-logind[1451]: New session 1 of user core. Jan 13 21:21:53.392971 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:21:53.408487 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:21:53.419882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:21:53.435525 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:21:53.438227 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:21:53.538720 systemd[1567]: Queued start job for default target default.target. Jan 13 21:21:53.548628 systemd[1567]: Created slice app.slice - User Application Slice. Jan 13 21:21:53.548656 systemd[1567]: Reached target paths.target - Paths. Jan 13 21:21:53.548670 systemd[1567]: Reached target timers.target - Timers. Jan 13 21:21:53.550381 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:21:53.562372 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:21:53.562531 systemd[1567]: Reached target sockets.target - Sockets. Jan 13 21:21:53.562556 systemd[1567]: Reached target basic.target - Basic System. Jan 13 21:21:53.562603 systemd[1567]: Reached target default.target - Main User Target. Jan 13 21:21:53.562645 systemd[1567]: Startup finished in 117ms. Jan 13 21:21:53.563031 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:21:53.564496 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:21:53.625245 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:33364.service - OpenSSH per-connection server daemon (10.0.0.1:33364). Jan 13 21:21:53.658328 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33364 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:53.659952 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:53.663927 systemd-logind[1451]: New session 2 of user core. Jan 13 21:21:53.673417 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:21:53.727699 sshd[1578]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:53.738114 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:33364.service: Deactivated successfully. Jan 13 21:21:53.739723 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:21:53.741231 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:21:53.770552 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:33368.service - OpenSSH per-connection server daemon (10.0.0.1:33368). Jan 13 21:21:53.771486 systemd-logind[1451]: Removed session 2. Jan 13 21:21:53.799000 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 33368 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:53.800600 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:53.804280 systemd-logind[1451]: New session 3 of user core. 
Jan 13 21:21:53.813400 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:21:53.862997 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:53.876108 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:33368.service: Deactivated successfully. Jan 13 21:21:53.877807 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:21:53.879061 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:21:53.887539 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:33374.service - OpenSSH per-connection server daemon (10.0.0.1:33374). Jan 13 21:21:53.888617 systemd-logind[1451]: Removed session 3. Jan 13 21:21:53.915241 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 33374 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:53.916671 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:53.920939 systemd-logind[1451]: New session 4 of user core. Jan 13 21:21:53.934516 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:21:53.989155 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:53.999921 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:33374.service: Deactivated successfully. Jan 13 21:21:54.001616 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:21:54.003225 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:21:54.004427 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:33376.service - OpenSSH per-connection server daemon (10.0.0.1:33376). Jan 13 21:21:54.005196 systemd-logind[1451]: Removed session 4. Jan 13 21:21:54.039244 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 33376 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:54.041107 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:54.045491 systemd-logind[1451]: New session 5 of user core. Jan 13 21:21:54.054442 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:21:54.113537 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:21:54.113952 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:54.138961 sudo[1602]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:54.141016 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:54.153721 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:33376.service: Deactivated successfully. Jan 13 21:21:54.155537 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:21:54.157313 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:21:54.166987 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:33390.service - OpenSSH per-connection server daemon (10.0.0.1:33390). Jan 13 21:21:54.168151 systemd-logind[1451]: Removed session 5. Jan 13 21:21:54.198392 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 33390 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:54.200212 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:54.205334 systemd-logind[1451]: New session 6 of user core. Jan 13 21:21:54.214562 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 21:21:54.270026 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:21:54.270371 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:54.274035 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:54.280186 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:21:54.280541 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:54.297552 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:54.299325 auditctl[1614]: No rules Jan 13 21:21:54.300455 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:21:54.300704 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:54.302395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:54.331458 augenrules[1632]: No rules Jan 13 21:21:54.332940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:54.334141 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:54.336075 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:54.345197 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:33390.service: Deactivated successfully. Jan 13 21:21:54.346875 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:21:54.348186 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:21:54.355576 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398). Jan 13 21:21:54.356589 systemd-logind[1451]: Removed session 6. Jan 13 21:21:54.385945 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:21:54.387773 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:54.391737 systemd-logind[1451]: New session 7 of user core. Jan 13 21:21:54.408515 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:21:54.461599 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:21:54.461949 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:55.309713 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:21:55.309787 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:21:55.605390 dockerd[1661]: time="2025-01-13T21:21:55.602990988Z" level=info msg="Starting up" Jan 13 21:21:57.309007 dockerd[1661]: time="2025-01-13T21:21:57.308961907Z" level=info msg="Loading containers: start." Jan 13 21:21:57.538331 kernel: Initializing XFRM netlink socket Jan 13 21:21:57.624304 systemd-networkd[1404]: docker0: Link UP Jan 13 21:21:57.687991 dockerd[1661]: time="2025-01-13T21:21:57.687929635Z" level=info msg="Loading containers: done." Jan 13 21:21:57.702461 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2619248041-merged.mount: Deactivated successfully. 
Jan 13 21:21:57.801477 dockerd[1661]: time="2025-01-13T21:21:57.801395300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:21:57.801620 dockerd[1661]: time="2025-01-13T21:21:57.801591748Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:21:57.801769 dockerd[1661]: time="2025-01-13T21:21:57.801737277Z" level=info msg="Daemon has completed initialization" Jan 13 21:21:57.876115 dockerd[1661]: time="2025-01-13T21:21:57.875978590Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:21:57.876213 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:21:58.526917 containerd[1470]: time="2025-01-13T21:21:58.526869620Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:22:00.971251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388394959.mount: Deactivated successfully. Jan 13 21:22:02.372609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:22:02.379639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:02.574205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:02.579174 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:22:02.622501 kubelet[1871]: E0113 21:22:02.622421 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:22:02.628751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:22:02.628981 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
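With the engine reporting "API listen on /run/docker.sock", a quick liveness check (not part of this log, just a standard one) is the Engine API ping endpoint over that Unix socket, or the equivalent CLI call:

    curl --unix-socket /run/docker.sock http://localhost/_ping   # prints "OK" when the daemon is answering
    docker version                                               # same check via the CLI; shows client and server versions

The kubelet restart attempt right after this still fails for the same missing /var/lib/kubelet/config.yaml reason noted earlier.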
Jan 13 21:22:03.077528 containerd[1470]: time="2025-01-13T21:22:03.077409910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:03.078413 containerd[1470]: time="2025-01-13T21:22:03.078365591Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 21:22:03.079831 containerd[1470]: time="2025-01-13T21:22:03.079792583Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:03.082623 containerd[1470]: time="2025-01-13T21:22:03.082590362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:03.083471 containerd[1470]: time="2025-01-13T21:22:03.083435214Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 4.55652163s" Jan 13 21:22:03.083471 containerd[1470]: time="2025-01-13T21:22:03.083468855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 21:22:03.084744 containerd[1470]: time="2025-01-13T21:22:03.084723554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:22:04.561104 containerd[1470]: time="2025-01-13T21:22:04.561021300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:04.561881 containerd[1470]: time="2025-01-13T21:22:04.561834531Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 21:22:04.563045 containerd[1470]: time="2025-01-13T21:22:04.562990975Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:04.566409 containerd[1470]: time="2025-01-13T21:22:04.566329632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:04.567392 containerd[1470]: time="2025-01-13T21:22:04.567359609Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.482609705s" Jan 13 21:22:04.567447 containerd[1470]: time="2025-01-13T21:22:04.567392100Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 21:22:04.567971 
containerd[1470]: time="2025-01-13T21:22:04.567939439Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:22:05.730208 containerd[1470]: time="2025-01-13T21:22:05.730141336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.731148 containerd[1470]: time="2025-01-13T21:22:05.731043484Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 21:22:05.733916 containerd[1470]: time="2025-01-13T21:22:05.733869434Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.737405 containerd[1470]: time="2025-01-13T21:22:05.737359056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.738500 containerd[1470]: time="2025-01-13T21:22:05.738463832Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.170488368s" Jan 13 21:22:05.738541 containerd[1470]: time="2025-01-13T21:22:05.738500329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 21:22:05.739459 containerd[1470]: time="2025-01-13T21:22:05.739437341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:22:08.461548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173999784.mount: Deactivated successfully. 
Jan 13 21:22:09.588025 containerd[1470]: time="2025-01-13T21:22:09.587939947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.589163 containerd[1470]: time="2025-01-13T21:22:09.589120113Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 21:22:09.590798 containerd[1470]: time="2025-01-13T21:22:09.590754404Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.593103 containerd[1470]: time="2025-01-13T21:22:09.593066386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.593685 containerd[1470]: time="2025-01-13T21:22:09.593627947Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 3.854163322s" Jan 13 21:22:09.593685 containerd[1470]: time="2025-01-13T21:22:09.593675615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:22:09.594149 containerd[1470]: time="2025-01-13T21:22:09.594111159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:22:10.836833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895512238.mount: Deactivated successfully. 
Jan 13 21:22:11.607232 containerd[1470]: time="2025-01-13T21:22:11.607175439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:11.608356 containerd[1470]: time="2025-01-13T21:22:11.608304743Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:22:11.609731 containerd[1470]: time="2025-01-13T21:22:11.609698829Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:11.612619 containerd[1470]: time="2025-01-13T21:22:11.612586809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:11.613795 containerd[1470]: time="2025-01-13T21:22:11.613756541Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.019573863s" Jan 13 21:22:11.613795 containerd[1470]: time="2025-01-13T21:22:11.613789967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:22:11.614364 containerd[1470]: time="2025-01-13T21:22:11.614329226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:22:12.441403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602008190.mount: Deactivated successfully. 
Jan 13 21:22:12.447375 containerd[1470]: time="2025-01-13T21:22:12.447331289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:12.448041 containerd[1470]: time="2025-01-13T21:22:12.447986291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 21:22:12.449217 containerd[1470]: time="2025-01-13T21:22:12.449163730Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:12.451245 containerd[1470]: time="2025-01-13T21:22:12.451205684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:12.451970 containerd[1470]: time="2025-01-13T21:22:12.451937896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 837.494819ms" Jan 13 21:22:12.452039 containerd[1470]: time="2025-01-13T21:22:12.451969970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 21:22:12.452627 containerd[1470]: time="2025-01-13T21:22:12.452445459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:22:12.879194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:22:12.888501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:13.035403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:13.039754 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:22:13.079906 kubelet[1952]: E0113 21:22:13.079845 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:22:13.084060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:22:13.084302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:22:13.380066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380798571.mount: Deactivated successfully. 
Jan 13 21:22:15.591478 containerd[1470]: time="2025-01-13T21:22:15.591404502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:15.592550 containerd[1470]: time="2025-01-13T21:22:15.592465894Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 21:22:15.593823 containerd[1470]: time="2025-01-13T21:22:15.593796306Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:15.596946 containerd[1470]: time="2025-01-13T21:22:15.596914016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:15.598252 containerd[1470]: time="2025-01-13T21:22:15.598195565Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.145719323s" Jan 13 21:22:15.598252 containerd[1470]: time="2025-01-13T21:22:15.598240412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 21:22:17.812050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:17.829535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:17.854503 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)... Jan 13 21:22:17.854525 systemd[1]: Reloading... Jan 13 21:22:17.936310 zram_generator::config[2085]: No configuration found. Jan 13 21:22:18.432679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:22:18.513330 systemd[1]: Reloading finished in 658 ms. Jan 13 21:22:18.560629 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:22:18.560747 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:22:18.561055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:18.562685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:18.710083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:18.714660 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:22:18.757199 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:22:18.757199 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:22:18.757199 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:22:18.757592 kubelet[2130]: I0113 21:22:18.757238 2130 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:22:18.882028 kubelet[2130]: I0113 21:22:18.881988 2130 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:22:18.882028 kubelet[2130]: I0113 21:22:18.882016 2130 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:22:18.882251 kubelet[2130]: I0113 21:22:18.882233 2130 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:22:18.907737 kubelet[2130]: I0113 21:22:18.907685 2130 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:22:18.911668 kubelet[2130]: E0113 21:22:18.909532 2130 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:18.919927 kubelet[2130]: E0113 21:22:18.919890 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:22:18.919927 kubelet[2130]: I0113 21:22:18.919926 2130 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:22:18.926023 kubelet[2130]: I0113 21:22:18.925999 2130 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:22:18.927292 kubelet[2130]: I0113 21:22:18.927252 2130 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:22:18.927461 kubelet[2130]: I0113 21:22:18.927426 2130 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:22:18.927648 kubelet[2130]: I0113 21:22:18.927451 2130 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:22:18.927648 kubelet[2130]: I0113 21:22:18.927645 2130 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:22:18.927757 kubelet[2130]: I0113 21:22:18.927655 2130 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:22:18.927815 kubelet[2130]: I0113 21:22:18.927795 2130 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:18.929494 kubelet[2130]: I0113 21:22:18.929466 2130 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:22:18.929494 kubelet[2130]: I0113 21:22:18.929490 2130 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:22:18.929555 kubelet[2130]: I0113 21:22:18.929541 2130 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:22:18.929578 kubelet[2130]: I0113 21:22:18.929561 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:22:18.933004 kubelet[2130]: W0113 21:22:18.932888 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:18.933004 kubelet[2130]: E0113 21:22:18.932973 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:18.933845 kubelet[2130]: W0113 21:22:18.933790 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:18.933845 kubelet[2130]: E0113 21:22:18.933834 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:18.936783 kubelet[2130]: I0113 21:22:18.936753 2130 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:22:18.938831 kubelet[2130]: I0113 21:22:18.938807 2130 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:22:18.939430 kubelet[2130]: W0113 21:22:18.939406 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:22:18.940107 kubelet[2130]: I0113 21:22:18.940083 2130 server.go:1269] "Started kubelet" Jan 13 21:22:18.942165 kubelet[2130]: I0113 21:22:18.940333 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:22:18.942165 kubelet[2130]: I0113 21:22:18.940768 2130 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:22:18.942165 kubelet[2130]: I0113 21:22:18.940831 2130 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:22:18.942165 kubelet[2130]: I0113 21:22:18.941708 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:22:18.942165 kubelet[2130]: I0113 21:22:18.941745 2130 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:22:18.943602 kubelet[2130]: I0113 21:22:18.942495 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:22:18.943602 kubelet[2130]: I0113 21:22:18.943147 2130 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:22:18.943602 kubelet[2130]: I0113 21:22:18.943261 2130 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:22:19.081072 kubelet[2130]: W0113 21:22:18.944913 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:19.081072 kubelet[2130]: E0113 21:22:18.944979 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:19.081072 kubelet[2130]: I0113 21:22:19.079532 2130 reconciler.go:26] "Reconciler: start to sync state" Jan 13 
21:22:19.081072 kubelet[2130]: E0113 21:22:18.945198 2130 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:22:19.081072 kubelet[2130]: E0113 21:22:19.079743 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms" Jan 13 21:22:19.081072 kubelet[2130]: E0113 21:22:19.080157 2130 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:22:19.082435 kubelet[2130]: I0113 21:22:19.081913 2130 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:22:19.082435 kubelet[2130]: I0113 21:22:19.082039 2130 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:22:19.085332 kubelet[2130]: I0113 21:22:19.084469 2130 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:22:19.088307 kubelet[2130]: E0113 21:22:19.084057 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d6f4e3c0589 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:22:18.940056969 +0000 UTC m=+0.221849951,LastTimestamp:2025-01-13 21:22:18.940056969 +0000 UTC m=+0.221849951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:22:19.101374 kubelet[2130]: I0113 21:22:19.101348 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:22:19.101604 kubelet[2130]: I0113 21:22:19.101587 2130 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:22:19.101688 kubelet[2130]: I0113 21:22:19.101675 2130 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:19.102716 kubelet[2130]: I0113 21:22:19.102677 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:22:19.103995 kubelet[2130]: I0113 21:22:19.103976 2130 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:22:19.104049 kubelet[2130]: I0113 21:22:19.104018 2130 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:22:19.104049 kubelet[2130]: I0113 21:22:19.104037 2130 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:22:19.105113 kubelet[2130]: E0113 21:22:19.104096 2130 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:22:19.180595 kubelet[2130]: E0113 21:22:19.180540 2130 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:22:19.204894 kubelet[2130]: E0113 21:22:19.204847 2130 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:22:19.280822 kubelet[2130]: E0113 21:22:19.280745 2130 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:22:19.281201 kubelet[2130]: E0113 21:22:19.281152 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Jan 13 21:22:19.381663 kubelet[2130]: E0113 21:22:19.381513 2130 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:22:19.391734 kubelet[2130]: I0113 21:22:19.391619 2130 policy_none.go:49] "None policy: Start" Jan 13 21:22:19.391734 kubelet[2130]: W0113 21:22:19.391659 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:19.391734 kubelet[2130]: E0113 21:22:19.391729 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:19.392631 kubelet[2130]: I0113 21:22:19.392597 2130 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:22:19.392685 kubelet[2130]: I0113 21:22:19.392636 2130 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:22:19.404116 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:22:19.405779 kubelet[2130]: E0113 21:22:19.405745 2130 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:22:19.422800 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:22:19.425709 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:22:19.433397 kubelet[2130]: I0113 21:22:19.433351 2130 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:22:19.433676 kubelet[2130]: I0113 21:22:19.433657 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:22:19.433725 kubelet[2130]: I0113 21:22:19.433679 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:22:19.433949 kubelet[2130]: I0113 21:22:19.433925 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:22:19.435395 kubelet[2130]: E0113 21:22:19.435366 2130 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:22:19.535311 kubelet[2130]: I0113 21:22:19.535223 2130 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:19.535568 kubelet[2130]: E0113 21:22:19.535542 2130 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jan 13 21:22:19.681963 kubelet[2130]: E0113 21:22:19.681815 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Jan 13 21:22:19.737315 kubelet[2130]: I0113 21:22:19.737263 2130 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:19.737656 kubelet[2130]: E0113 21:22:19.737616 2130 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jan 13 21:22:19.814778 systemd[1]: Created slice kubepods-burstable-pode4d402ad46be5b8209c97753d5d6a31a.slice - libcontainer container kubepods-burstable-pode4d402ad46be5b8209c97753d5d6a31a.slice. Jan 13 21:22:19.843681 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 21:22:19.851327 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 21:22:19.885270 kubelet[2130]: I0113 21:22:19.885227 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:19.885270 kubelet[2130]: I0113 21:22:19.885264 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:19.885270 kubelet[2130]: I0113 21:22:19.885303 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:19.885815 kubelet[2130]: I0113 21:22:19.885322 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:19.885815 kubelet[2130]: I0113 21:22:19.885337 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:19.885815 kubelet[2130]: I0113 21:22:19.885351 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:19.885815 kubelet[2130]: I0113 21:22:19.885366 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:19.885815 kubelet[2130]: I0113 21:22:19.885411 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:19.885919 kubelet[2130]: I0113 21:22:19.885504 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " 
pod="kube-system/kube-scheduler-localhost" Jan 13 21:22:20.139194 kubelet[2130]: I0113 21:22:20.139097 2130 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:20.139492 kubelet[2130]: E0113 21:22:20.139465 2130 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jan 13 21:22:20.140669 kubelet[2130]: E0113 21:22:20.140642 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:20.141206 containerd[1470]: time="2025-01-13T21:22:20.141175049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4d402ad46be5b8209c97753d5d6a31a,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:20.146575 kubelet[2130]: E0113 21:22:20.146544 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:20.146999 containerd[1470]: time="2025-01-13T21:22:20.146958467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:20.154266 kubelet[2130]: E0113 21:22:20.154232 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:20.154774 containerd[1470]: time="2025-01-13T21:22:20.154726177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:20.161262 kubelet[2130]: W0113 21:22:20.161204 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:20.161401 kubelet[2130]: E0113 21:22:20.161269 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:20.307720 kubelet[2130]: W0113 21:22:20.307650 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:20.307720 kubelet[2130]: E0113 21:22:20.307722 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:20.468699 kubelet[2130]: W0113 21:22:20.468552 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: 
connect: connection refused Jan 13 21:22:20.468699 kubelet[2130]: E0113 21:22:20.468628 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:20.482224 kubelet[2130]: E0113 21:22:20.482176 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s" Jan 13 21:22:20.727815 kubelet[2130]: W0113 21:22:20.727702 2130 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jan 13 21:22:20.727815 kubelet[2130]: E0113 21:22:20.727758 2130 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:20.941671 kubelet[2130]: I0113 21:22:20.941626 2130 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:20.942098 kubelet[2130]: E0113 21:22:20.941887 2130 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jan 13 21:22:21.071384 kubelet[2130]: E0113 21:22:21.071251 2130 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:21.983559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454370220.mount: Deactivated successfully. 
Jan 13 21:22:21.989951 containerd[1470]: time="2025-01-13T21:22:21.989890269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:21.991785 containerd[1470]: time="2025-01-13T21:22:21.991668201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:21.992605 containerd[1470]: time="2025-01-13T21:22:21.992570395Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:21.993731 containerd[1470]: time="2025-01-13T21:22:21.993676049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:21.994946 containerd[1470]: time="2025-01-13T21:22:21.994884404Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:21.995543 containerd[1470]: time="2025-01-13T21:22:21.995499731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:21.996339 containerd[1470]: time="2025-01-13T21:22:21.996269802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:22:21.998790 containerd[1470]: time="2025-01-13T21:22:21.998753992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:22.001548 containerd[1470]: time="2025-01-13T21:22:22.001510183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.860261164s" Jan 13 21:22:22.002533 containerd[1470]: time="2025-01-13T21:22:22.002489991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.847679517s" Jan 13 21:22:22.003318 containerd[1470]: time="2025-01-13T21:22:22.003270316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.856228973s" Jan 13 21:22:22.083253 kubelet[2130]: E0113 21:22:22.083203 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="3.2s" Jan 13 
21:22:22.150359 containerd[1470]: time="2025-01-13T21:22:22.150164366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:22.150359 containerd[1470]: time="2025-01-13T21:22:22.150307299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:22.150359 containerd[1470]: time="2025-01-13T21:22:22.150328938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.150716 containerd[1470]: time="2025-01-13T21:22:22.150554839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:22.150716 containerd[1470]: time="2025-01-13T21:22:22.150607693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:22.150716 containerd[1470]: time="2025-01-13T21:22:22.150628239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.150843 containerd[1470]: time="2025-01-13T21:22:22.150737963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.150973 containerd[1470]: time="2025-01-13T21:22:22.150913195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.155356 containerd[1470]: time="2025-01-13T21:22:22.154557349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:22.155356 containerd[1470]: time="2025-01-13T21:22:22.154606756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:22.155356 containerd[1470]: time="2025-01-13T21:22:22.154621071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.155356 containerd[1470]: time="2025-01-13T21:22:22.154714006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:22.177471 systemd[1]: Started cri-containerd-7e24b2b28b5285f81670e8adba6235fdee32095835524396d7aa6c3aa802397d.scope - libcontainer container 7e24b2b28b5285f81670e8adba6235fdee32095835524396d7aa6c3aa802397d. Jan 13 21:22:22.182585 systemd[1]: Started cri-containerd-4c0bd6642cad9fca19773739c75d81d7578769cd8d571e710ce57493fbc3fdfa.scope - libcontainer container 4c0bd6642cad9fca19773739c75d81d7578769cd8d571e710ce57493fbc3fdfa. Jan 13 21:22:22.184901 systemd[1]: Started cri-containerd-9efa8ec7ad43091703e81c12b46bd1098589c66c2c25f790da4dd2a283264dc6.scope - libcontainer container 9efa8ec7ad43091703e81c12b46bd1098589c66c2c25f790da4dd2a283264dc6. 
Jan 13 21:22:22.231909 containerd[1470]: time="2025-01-13T21:22:22.231856382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e24b2b28b5285f81670e8adba6235fdee32095835524396d7aa6c3aa802397d\"" Jan 13 21:22:22.232157 containerd[1470]: time="2025-01-13T21:22:22.232099002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4d402ad46be5b8209c97753d5d6a31a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efa8ec7ad43091703e81c12b46bd1098589c66c2c25f790da4dd2a283264dc6\"" Jan 13 21:22:22.234317 kubelet[2130]: E0113 21:22:22.233831 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:22.234317 kubelet[2130]: E0113 21:22:22.234147 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:22.238660 containerd[1470]: time="2025-01-13T21:22:22.237666448Z" level=info msg="CreateContainer within sandbox \"7e24b2b28b5285f81670e8adba6235fdee32095835524396d7aa6c3aa802397d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:22:22.238660 containerd[1470]: time="2025-01-13T21:22:22.237719662Z" level=info msg="CreateContainer within sandbox \"9efa8ec7ad43091703e81c12b46bd1098589c66c2c25f790da4dd2a283264dc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:22:22.238660 containerd[1470]: time="2025-01-13T21:22:22.238070363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0bd6642cad9fca19773739c75d81d7578769cd8d571e710ce57493fbc3fdfa\"" Jan 13 21:22:22.239380 kubelet[2130]: E0113 21:22:22.239354 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:22.252971 containerd[1470]: time="2025-01-13T21:22:22.252848157Z" level=info msg="CreateContainer within sandbox \"4c0bd6642cad9fca19773739c75d81d7578769cd8d571e710ce57493fbc3fdfa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:22:22.257933 containerd[1470]: time="2025-01-13T21:22:22.257691277Z" level=info msg="CreateContainer within sandbox \"9efa8ec7ad43091703e81c12b46bd1098589c66c2c25f790da4dd2a283264dc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12efddb0f7c386770d4baeda86ad7a6875ef6d8b0e274ac54c366dd607d10c25\"" Jan 13 21:22:22.258453 containerd[1470]: time="2025-01-13T21:22:22.258419459Z" level=info msg="StartContainer for \"12efddb0f7c386770d4baeda86ad7a6875ef6d8b0e274ac54c366dd607d10c25\"" Jan 13 21:22:22.274916 containerd[1470]: time="2025-01-13T21:22:22.274851406Z" level=info msg="CreateContainer within sandbox \"7e24b2b28b5285f81670e8adba6235fdee32095835524396d7aa6c3aa802397d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b93a6795bb5ef6615faf0108805c1207f858e952c90048607376f86ce832a18\"" Jan 13 21:22:22.275559 containerd[1470]: time="2025-01-13T21:22:22.275510986Z" level=info msg="StartContainer for \"8b93a6795bb5ef6615faf0108805c1207f858e952c90048607376f86ce832a18\"" Jan 13 21:22:22.277043 
containerd[1470]: time="2025-01-13T21:22:22.277004685Z" level=info msg="CreateContainer within sandbox \"4c0bd6642cad9fca19773739c75d81d7578769cd8d571e710ce57493fbc3fdfa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4941e732d3c70afc71102577646b4e1c99879d40a7a35dd5e10f43694df5b104\"" Jan 13 21:22:22.277560 containerd[1470]: time="2025-01-13T21:22:22.277514429Z" level=info msg="StartContainer for \"4941e732d3c70afc71102577646b4e1c99879d40a7a35dd5e10f43694df5b104\"" Jan 13 21:22:22.287442 systemd[1]: Started cri-containerd-12efddb0f7c386770d4baeda86ad7a6875ef6d8b0e274ac54c366dd607d10c25.scope - libcontainer container 12efddb0f7c386770d4baeda86ad7a6875ef6d8b0e274ac54c366dd607d10c25. Jan 13 21:22:22.306513 systemd[1]: Started cri-containerd-8b93a6795bb5ef6615faf0108805c1207f858e952c90048607376f86ce832a18.scope - libcontainer container 8b93a6795bb5ef6615faf0108805c1207f858e952c90048607376f86ce832a18. Jan 13 21:22:22.326545 systemd[1]: Started cri-containerd-4941e732d3c70afc71102577646b4e1c99879d40a7a35dd5e10f43694df5b104.scope - libcontainer container 4941e732d3c70afc71102577646b4e1c99879d40a7a35dd5e10f43694df5b104. Jan 13 21:22:22.352496 containerd[1470]: time="2025-01-13T21:22:22.352424745Z" level=info msg="StartContainer for \"12efddb0f7c386770d4baeda86ad7a6875ef6d8b0e274ac54c366dd607d10c25\" returns successfully" Jan 13 21:22:22.373003 containerd[1470]: time="2025-01-13T21:22:22.372326788Z" level=info msg="StartContainer for \"8b93a6795bb5ef6615faf0108805c1207f858e952c90048607376f86ce832a18\" returns successfully" Jan 13 21:22:22.383371 containerd[1470]: time="2025-01-13T21:22:22.383229637Z" level=info msg="StartContainer for \"4941e732d3c70afc71102577646b4e1c99879d40a7a35dd5e10f43694df5b104\" returns successfully" Jan 13 21:22:22.546405 kubelet[2130]: I0113 21:22:22.544822 2130 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:23.120312 kubelet[2130]: E0113 21:22:23.119310 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:23.123303 kubelet[2130]: E0113 21:22:23.121948 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:23.124476 kubelet[2130]: E0113 21:22:23.124443 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:23.548034 kubelet[2130]: I0113 21:22:23.547560 2130 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:22:23.548034 kubelet[2130]: E0113 21:22:23.547664 2130 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 21:22:23.932530 kubelet[2130]: I0113 21:22:23.932364 2130 apiserver.go:52] "Watching apiserver" Jan 13 21:22:23.944100 kubelet[2130]: I0113 21:22:23.944051 2130 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:22:24.131017 kubelet[2130]: E0113 21:22:24.130963 2130 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:24.131467 kubelet[2130]: 
E0113 21:22:24.131190 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:25.231476 kubelet[2130]: E0113 21:22:25.231422 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:26.128692 kubelet[2130]: E0113 21:22:26.128650 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:27.008737 kubelet[2130]: E0113 21:22:27.008701 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:27.130789 kubelet[2130]: E0113 21:22:27.130750 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:27.152111 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)... Jan 13 21:22:27.152129 systemd[1]: Reloading... Jan 13 21:22:27.218307 zram_generator::config[2449]: No configuration found. Jan 13 21:22:27.340430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:22:27.437104 systemd[1]: Reloading finished in 284 ms. Jan 13 21:22:27.484025 kubelet[2130]: I0113 21:22:27.483964 2130 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:22:27.484034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:27.507808 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:22:27.508115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:27.508177 systemd[1]: kubelet.service: Consumed 1.247s CPU time, 121.7M memory peak, 0B memory swap peak. Jan 13 21:22:27.517568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:27.692665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:27.699569 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:22:27.756055 kubelet[2494]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:22:27.756055 kubelet[2494]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:22:27.756055 kubelet[2494]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:22:27.756558 kubelet[2494]: I0113 21:22:27.756174 2494 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:22:27.762940 kubelet[2494]: I0113 21:22:27.762881 2494 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:22:27.762940 kubelet[2494]: I0113 21:22:27.762916 2494 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:22:27.763192 kubelet[2494]: I0113 21:22:27.763169 2494 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:22:27.765787 kubelet[2494]: I0113 21:22:27.765737 2494 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:22:27.768305 kubelet[2494]: I0113 21:22:27.768257 2494 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:22:27.772059 kubelet[2494]: E0113 21:22:27.771992 2494 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:22:27.772059 kubelet[2494]: I0113 21:22:27.772059 2494 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:22:27.778742 kubelet[2494]: I0113 21:22:27.778697 2494 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:22:27.778864 kubelet[2494]: I0113 21:22:27.778847 2494 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:22:27.779097 kubelet[2494]: I0113 21:22:27.779038 2494 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:22:27.779348 kubelet[2494]: I0113 21:22:27.779083 2494 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:22:27.779348 kubelet[2494]: I0113 21:22:27.779340 2494 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:22:27.779469 kubelet[2494]: I0113 21:22:27.779353 2494 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:22:27.779469 kubelet[2494]: I0113 21:22:27.779395 2494 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:27.779569 kubelet[2494]: I0113 21:22:27.779549 2494 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:22:27.779675 kubelet[2494]: I0113 21:22:27.779630 2494 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:22:27.779675 kubelet[2494]: I0113 21:22:27.779678 2494 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:22:27.779749 kubelet[2494]: I0113 21:22:27.779696 2494 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:22:27.780745 kubelet[2494]: I0113 21:22:27.780714 2494 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:22:27.781482 kubelet[2494]: I0113 21:22:27.781456 2494 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:22:27.782658 kubelet[2494]: I0113 21:22:27.782627 2494 server.go:1269] "Started kubelet" Jan 13 21:22:27.784454 kubelet[2494]: I0113 21:22:27.784000 2494 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:22:27.784454 kubelet[2494]: I0113 21:22:27.784440 2494 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:22:27.784550 kubelet[2494]: I0113 21:22:27.784509 2494 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:22:27.786146 kubelet[2494]: I0113 21:22:27.786121 2494 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:22:27.789594 kubelet[2494]: I0113 21:22:27.787698 2494 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:22:27.790563 kubelet[2494]: I0113 21:22:27.790538 2494 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:22:27.792435 kubelet[2494]: I0113 21:22:27.792412 2494 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:22:27.793452 kubelet[2494]: I0113 21:22:27.793435 2494 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:22:27.794051 kubelet[2494]: I0113 21:22:27.793792 2494 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:22:27.794586 kubelet[2494]: I0113 21:22:27.794562 2494 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:22:27.794772 kubelet[2494]: I0113 21:22:27.794733 2494 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:22:27.797151 kubelet[2494]: I0113 21:22:27.797121 2494 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:22:27.800586 kubelet[2494]: E0113 21:22:27.800560 2494 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:22:27.805362 kubelet[2494]: I0113 21:22:27.805270 2494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:22:27.806867 kubelet[2494]: I0113 21:22:27.806807 2494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:22:27.806867 kubelet[2494]: I0113 21:22:27.806845 2494 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:22:27.806867 kubelet[2494]: I0113 21:22:27.806867 2494 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:22:27.806997 kubelet[2494]: E0113 21:22:27.806922 2494 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:22:27.838777 kubelet[2494]: I0113 21:22:27.838727 2494 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:22:27.838777 kubelet[2494]: I0113 21:22:27.838755 2494 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:22:27.838777 kubelet[2494]: I0113 21:22:27.838778 2494 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:27.838994 kubelet[2494]: I0113 21:22:27.838970 2494 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:22:27.839048 kubelet[2494]: I0113 21:22:27.838990 2494 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:22:27.839048 kubelet[2494]: I0113 21:22:27.839028 2494 policy_none.go:49] "None policy: Start" Jan 13 21:22:27.839634 kubelet[2494]: I0113 21:22:27.839613 2494 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:22:27.839689 kubelet[2494]: I0113 21:22:27.839646 2494 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:22:27.839935 kubelet[2494]: I0113 21:22:27.839907 2494 state_mem.go:75] "Updated machine memory state" Jan 13 21:22:27.844898 kubelet[2494]: I0113 21:22:27.844752 2494 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:22:27.844977 kubelet[2494]: I0113 21:22:27.844964 2494 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jan 13 21:22:27.845051 kubelet[2494]: I0113 21:22:27.844977 2494 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:22:27.845222 kubelet[2494]: I0113 21:22:27.845197 2494 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:22:27.916269 kubelet[2494]: E0113 21:22:27.916172 2494 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.916691 kubelet[2494]: E0113 21:22:27.916627 2494 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:27.954534 kubelet[2494]: I0113 21:22:27.954399 2494 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:22:27.966121 kubelet[2494]: I0113 21:22:27.966076 2494 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 21:22:27.966250 kubelet[2494]: I0113 21:22:27.966189 2494 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:22:27.994713 kubelet[2494]: I0113 21:22:27.994651 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.994713 kubelet[2494]: I0113 21:22:27.994710 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.994713 kubelet[2494]: I0113 21:22:27.994731 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.994983 kubelet[2494]: I0113 21:22:27.994752 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:27.994983 kubelet[2494]: I0113 21:22:27.994769 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.994983 kubelet[2494]: I0113 21:22:27.994786 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:27.994983 kubelet[2494]: I0113 21:22:27.994802 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:22:27.994983 kubelet[2494]: I0113 21:22:27.994817 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:27.995180 kubelet[2494]: I0113 21:22:27.994863 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d402ad46be5b8209c97753d5d6a31a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d402ad46be5b8209c97753d5d6a31a\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:28.214839 kubelet[2494]: E0113 21:22:28.214706 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.216895 kubelet[2494]: E0113 21:22:28.216838 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.217111 kubelet[2494]: E0113 21:22:28.217081 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.781133 kubelet[2494]: I0113 21:22:28.781093 2494 apiserver.go:52] "Watching apiserver" Jan 13 21:22:28.793595 kubelet[2494]: I0113 21:22:28.793530 2494 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:22:28.830057 kubelet[2494]: E0113 21:22:28.829998 2494 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:22:28.830491 kubelet[2494]: E0113 21:22:28.830473 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.836101 kubelet[2494]: E0113 21:22:28.835605 2494 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:22:28.836101 kubelet[2494]: E0113 21:22:28.835822 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.836101 kubelet[2494]: E0113 21:22:28.835935 2494 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:22:28.836101 kubelet[2494]: E0113 21:22:28.836038 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.891087 kubelet[2494]: I0113 21:22:28.890925 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.890883772 podStartE2EDuration="3.890883772s" podCreationTimestamp="2025-01-13 21:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:28.88970896 +0000 UTC m=+1.185248381" watchObservedRunningTime="2025-01-13 21:22:28.890883772 +0000 UTC m=+1.186423194" Jan 13 21:22:28.902100 kubelet[2494]: I0113 21:22:28.901750 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.901728963 podStartE2EDuration="1.901728963s" podCreationTimestamp="2025-01-13 21:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:28.901667081 +0000 UTC m=+1.197206502" watchObservedRunningTime="2025-01-13 21:22:28.901728963 +0000 UTC m=+1.197268385" Jan 13 21:22:29.823563 kubelet[2494]: E0113 21:22:29.823501 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:29.824054 kubelet[2494]: E0113 21:22:29.823595 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:29.824054 kubelet[2494]: E0113 21:22:29.823638 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:31.110430 kubelet[2494]: E0113 21:22:31.110330 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:31.591248 kubelet[2494]: E0113 21:22:31.591117 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:32.086345 kubelet[2494]: I0113 21:22:32.086212 2494 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:22:32.086617 containerd[1470]: time="2025-01-13T21:22:32.086580355Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:22:32.086979 kubelet[2494]: I0113 21:22:32.086797 2494 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:22:33.580017 kubelet[2494]: I0113 21:22:33.579927 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.579902923 podStartE2EDuration="6.579902923s" podCreationTimestamp="2025-01-13 21:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:28.911522936 +0000 UTC m=+1.207062357" watchObservedRunningTime="2025-01-13 21:22:33.579902923 +0000 UTC m=+5.875442344" Jan 13 21:22:33.592222 systemd[1]: Created slice kubepods-besteffort-pod93781a09_6199_447d_8ec6_4e9ae5a5c3da.slice - libcontainer container kubepods-besteffort-pod93781a09_6199_447d_8ec6_4e9ae5a5c3da.slice. Jan 13 21:22:33.624374 sudo[1643]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:33.627832 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:33.630731 kubelet[2494]: I0113 21:22:33.630685 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93781a09-6199-447d-8ec6-4e9ae5a5c3da-kube-proxy\") pod \"kube-proxy-wzrkx\" (UID: \"93781a09-6199-447d-8ec6-4e9ae5a5c3da\") " pod="kube-system/kube-proxy-wzrkx" Jan 13 21:22:33.630731 kubelet[2494]: I0113 21:22:33.630722 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93781a09-6199-447d-8ec6-4e9ae5a5c3da-xtables-lock\") pod \"kube-proxy-wzrkx\" (UID: \"93781a09-6199-447d-8ec6-4e9ae5a5c3da\") " pod="kube-system/kube-proxy-wzrkx" Jan 13 21:22:33.630870 kubelet[2494]: I0113 21:22:33.630737 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93781a09-6199-447d-8ec6-4e9ae5a5c3da-lib-modules\") pod \"kube-proxy-wzrkx\" (UID: \"93781a09-6199-447d-8ec6-4e9ae5a5c3da\") " pod="kube-system/kube-proxy-wzrkx" Jan 13 21:22:33.630870 kubelet[2494]: I0113 21:22:33.630762 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92qk\" (UniqueName: \"kubernetes.io/projected/93781a09-6199-447d-8ec6-4e9ae5a5c3da-kube-api-access-q92qk\") pod \"kube-proxy-wzrkx\" (UID: \"93781a09-6199-447d-8ec6-4e9ae5a5c3da\") " pod="kube-system/kube-proxy-wzrkx" Jan 13 21:22:33.635221 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:33398.service: Deactivated successfully. Jan 13 21:22:33.637315 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:22:33.637504 systemd[1]: session-7.scope: Consumed 5.353s CPU time, 159.4M memory peak, 0B memory swap peak. Jan 13 21:22:33.639763 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:22:33.640733 systemd[1]: Created slice kubepods-besteffort-podfc794853_cd0c_4458_9504_bd9c3bb9ec06.slice - libcontainer container kubepods-besteffort-podfc794853_cd0c_4458_9504_bd9c3bb9ec06.slice. Jan 13 21:22:33.641660 systemd-logind[1451]: Removed session 7. 
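The reconciler_common entries above list the volumes the kubelet verifies for kube-proxy-wzrkx: a kube-proxy ConfigMap, the xtables-lock and lib-modules host paths, and a projected service-account token. A rough sketch of that volume set using the Kubernetes Go API; only the volume names come from the log, the host paths and ConfigMap name are assumptions:

```go
// Sketch of the kube-proxy volume set verified above (names from the log,
// locations assumed).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			// ConfigMap holding the kube-proxy configuration.
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			// Host path used to serialize iptables access (assumed location).
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
			},
		},
		{
			// Host kernel modules, typically mounted read-only (assumed location).
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		// The fourth volume in the log, kube-api-access-q92qk, is a projected
		// service-account token; its construction is omitted here for brevity.
	}

	for _, v := range volumes {
		fmt.Println("verify attach for volume:", v.Name)
	}
}
```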
Jan 13 21:22:33.731249 kubelet[2494]: I0113 21:22:33.731176 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc794853-cd0c-4458-9504-bd9c3bb9ec06-var-lib-calico\") pod \"tigera-operator-76c4976dd7-r9tkt\" (UID: \"fc794853-cd0c-4458-9504-bd9c3bb9ec06\") " pod="tigera-operator/tigera-operator-76c4976dd7-r9tkt" Jan 13 21:22:33.731249 kubelet[2494]: I0113 21:22:33.731237 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgd6r\" (UniqueName: \"kubernetes.io/projected/fc794853-cd0c-4458-9504-bd9c3bb9ec06-kube-api-access-jgd6r\") pod \"tigera-operator-76c4976dd7-r9tkt\" (UID: \"fc794853-cd0c-4458-9504-bd9c3bb9ec06\") " pod="tigera-operator/tigera-operator-76c4976dd7-r9tkt" Jan 13 21:22:33.906634 kubelet[2494]: E0113 21:22:33.906471 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:33.907326 containerd[1470]: time="2025-01-13T21:22:33.907248799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzrkx,Uid:93781a09-6199-447d-8ec6-4e9ae5a5c3da,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:33.933549 containerd[1470]: time="2025-01-13T21:22:33.932783680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:33.933549 containerd[1470]: time="2025-01-13T21:22:33.933516167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:33.933915 containerd[1470]: time="2025-01-13T21:22:33.933530083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:33.933915 containerd[1470]: time="2025-01-13T21:22:33.933628383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:33.944243 containerd[1470]: time="2025-01-13T21:22:33.944123721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r9tkt,Uid:fc794853-cd0c-4458-9504-bd9c3bb9ec06,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:22:33.954440 systemd[1]: Started cri-containerd-3a6d46e9f8a9cc0c907041e8d748a706a22f0dc467e3d32c8cc1ce2b4196f5cf.scope - libcontainer container 3a6d46e9f8a9cc0c907041e8d748a706a22f0dc467e3d32c8cc1ce2b4196f5cf. Jan 13 21:22:33.975366 containerd[1470]: time="2025-01-13T21:22:33.975185883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:33.975366 containerd[1470]: time="2025-01-13T21:22:33.975275055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:33.975366 containerd[1470]: time="2025-01-13T21:22:33.975325237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:33.976102 containerd[1470]: time="2025-01-13T21:22:33.976037046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:33.980991 containerd[1470]: time="2025-01-13T21:22:33.980921743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzrkx,Uid:93781a09-6199-447d-8ec6-4e9ae5a5c3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a6d46e9f8a9cc0c907041e8d748a706a22f0dc467e3d32c8cc1ce2b4196f5cf\"" Jan 13 21:22:33.983631 kubelet[2494]: E0113 21:22:33.983589 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:33.986956 containerd[1470]: time="2025-01-13T21:22:33.986918590Z" level=info msg="CreateContainer within sandbox \"3a6d46e9f8a9cc0c907041e8d748a706a22f0dc467e3d32c8cc1ce2b4196f5cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:22:33.999418 systemd[1]: Started cri-containerd-aee12e9e5cd086a34b2ad53af8939ba26f0bc9240f170e4dbebe4fa808097397.scope - libcontainer container aee12e9e5cd086a34b2ad53af8939ba26f0bc9240f170e4dbebe4fa808097397. Jan 13 21:22:34.011710 containerd[1470]: time="2025-01-13T21:22:34.011667542Z" level=info msg="CreateContainer within sandbox \"3a6d46e9f8a9cc0c907041e8d748a706a22f0dc467e3d32c8cc1ce2b4196f5cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"946b8071f4385af50384c30ea8b96d3b14fd200ac7cb36fa12cd22841a90abf4\"" Jan 13 21:22:34.013256 containerd[1470]: time="2025-01-13T21:22:34.013212698Z" level=info msg="StartContainer for \"946b8071f4385af50384c30ea8b96d3b14fd200ac7cb36fa12cd22841a90abf4\"" Jan 13 21:22:34.044471 containerd[1470]: time="2025-01-13T21:22:34.044423278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r9tkt,Uid:fc794853-cd0c-4458-9504-bd9c3bb9ec06,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aee12e9e5cd086a34b2ad53af8939ba26f0bc9240f170e4dbebe4fa808097397\"" Jan 13 21:22:34.046742 containerd[1470]: time="2025-01-13T21:22:34.046647846Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:22:34.060597 systemd[1]: Started cri-containerd-946b8071f4385af50384c30ea8b96d3b14fd200ac7cb36fa12cd22841a90abf4.scope - libcontainer container 946b8071f4385af50384c30ea8b96d3b14fd200ac7cb36fa12cd22841a90abf4. Jan 13 21:22:34.097332 containerd[1470]: time="2025-01-13T21:22:34.097250585Z" level=info msg="StartContainer for \"946b8071f4385af50384c30ea8b96d3b14fd200ac7cb36fa12cd22841a90abf4\" returns successfully" Jan 13 21:22:34.550646 update_engine[1454]: I20250113 21:22:34.550511 1454 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:22:34.577324 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2840) Jan 13 21:22:34.610319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2842) Jan 13 21:22:34.649378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2842) Jan 13 21:22:34.832107 kubelet[2494]: E0113 21:22:34.831973 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:34.840425 kubelet[2494]: I0113 21:22:34.840363 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzrkx" podStartSLOduration=1.840340178 podStartE2EDuration="1.840340178s" podCreationTimestamp="2025-01-13 21:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:34.840096323 +0000 UTC m=+7.135635744" watchObservedRunningTime="2025-01-13 21:22:34.840340178 +0000 UTC m=+7.135879599" Jan 13 21:22:35.239292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452107803.mount: Deactivated successfully. Jan 13 21:22:35.559375 containerd[1470]: time="2025-01-13T21:22:35.559245603Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:35.560102 containerd[1470]: time="2025-01-13T21:22:35.560041341Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764281" Jan 13 21:22:35.561467 containerd[1470]: time="2025-01-13T21:22:35.561435585Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:35.564188 containerd[1470]: time="2025-01-13T21:22:35.564143775Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:35.565301 containerd[1470]: time="2025-01-13T21:22:35.565242547Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.518562853s" Jan 13 21:22:35.565334 containerd[1470]: time="2025-01-13T21:22:35.565315291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:22:35.567767 containerd[1470]: time="2025-01-13T21:22:35.567358052Z" level=info msg="CreateContainer within sandbox \"aee12e9e5cd086a34b2ad53af8939ba26f0bc9240f170e4dbebe4fa808097397\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:22:35.578734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418710262.mount: Deactivated successfully. 
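The pod_startup_latency_tracker entry above reports podStartSLOduration=1.840340178s for kube-proxy-wzrkx; since nothing was pulled, that figure is simply the observed running time minus the pod creation timestamp. A quick check in Go, with the two timestamps copied from the log entry:

```go
// Verify that podStartSLOduration is observedRunningTime minus podCreationTimestamp
// for the kube-proxy entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-01-13 21:22:33 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-13 21:22:33 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-13 21:22:34.840340178 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println("podStartSLOduration:", observed.Sub(created)) // prints 1.840340178s
}
```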
Jan 13 21:22:35.580785 containerd[1470]: time="2025-01-13T21:22:35.580723590Z" level=info msg="CreateContainer within sandbox \"aee12e9e5cd086a34b2ad53af8939ba26f0bc9240f170e4dbebe4fa808097397\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a2db09ba8eed03ff265e19dc674a21875753b542c591522fc16370a3c41f4ae0\"" Jan 13 21:22:35.581235 containerd[1470]: time="2025-01-13T21:22:35.581211073Z" level=info msg="StartContainer for \"a2db09ba8eed03ff265e19dc674a21875753b542c591522fc16370a3c41f4ae0\"" Jan 13 21:22:35.614511 systemd[1]: Started cri-containerd-a2db09ba8eed03ff265e19dc674a21875753b542c591522fc16370a3c41f4ae0.scope - libcontainer container a2db09ba8eed03ff265e19dc674a21875753b542c591522fc16370a3c41f4ae0. Jan 13 21:22:35.921767 containerd[1470]: time="2025-01-13T21:22:35.921645229Z" level=info msg="StartContainer for \"a2db09ba8eed03ff265e19dc674a21875753b542c591522fc16370a3c41f4ae0\" returns successfully" Jan 13 21:22:36.494894 kubelet[2494]: E0113 21:22:36.494849 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:36.507463 kubelet[2494]: I0113 21:22:36.507409 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-r9tkt" podStartSLOduration=1.987296836 podStartE2EDuration="3.507396774s" podCreationTimestamp="2025-01-13 21:22:33 +0000 UTC" firstStartedPulling="2025-01-13 21:22:34.046100013 +0000 UTC m=+6.341639444" lastFinishedPulling="2025-01-13 21:22:35.566199961 +0000 UTC m=+7.861739382" observedRunningTime="2025-01-13 21:22:35.933466589 +0000 UTC m=+8.229006010" watchObservedRunningTime="2025-01-13 21:22:36.507396774 +0000 UTC m=+8.802936195" Jan 13 21:22:36.925680 kubelet[2494]: E0113 21:22:36.925614 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:38.668832 systemd[1]: Created slice kubepods-besteffort-pod40712e98_8e24_425d_86b6_ff2050ca24e4.slice - libcontainer container kubepods-besteffort-pod40712e98_8e24_425d_86b6_ff2050ca24e4.slice. Jan 13 21:22:38.716739 systemd[1]: Created slice kubepods-besteffort-podf5433755_69a7_4e58_83b7_b41eee34a30c.slice - libcontainer container kubepods-besteffort-podf5433755_69a7_4e58_83b7_b41eee34a30c.slice. 
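The entries above trace the full pull/create/start sequence for the tigera-operator container inside its sandbox. The kubelet drives this through the CRI gRPC API; the sketch below walks the same pull/create/start steps using containerd's Go client directly, which is not how the kubelet does it but shows the sequence. The socket path and container ID are assumptions; the image reference is the one pulled in the log:

```go
// Pull an image, create a container, and start its task with the containerd Go
// client -- roughly the Pull/Create/Start sequence visible in the log above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd socket (assumed default location).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "example-tigera-operator",
		containerd.WithNewSnapshot("example-tigera-operator-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started")
}
```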
Jan 13 21:22:38.763059 kubelet[2494]: I0113 21:22:38.763002 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-cni-bin-dir\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763059 kubelet[2494]: I0113 21:22:38.763053 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-xtables-lock\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763059 kubelet[2494]: I0113 21:22:38.763072 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-var-lib-calico\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763607 kubelet[2494]: I0113 21:22:38.763089 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-flexvol-driver-host\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763607 kubelet[2494]: I0113 21:22:38.763215 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40712e98-8e24-425d-86b6-ff2050ca24e4-tigera-ca-bundle\") pod \"calico-typha-54f455887-lqpd8\" (UID: \"40712e98-8e24-425d-86b6-ff2050ca24e4\") " pod="calico-system/calico-typha-54f455887-lqpd8" Jan 13 21:22:38.763607 kubelet[2494]: I0113 21:22:38.763272 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-var-run-calico\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763607 kubelet[2494]: I0113 21:22:38.763304 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5433755-69a7-4e58-83b7-b41eee34a30c-tigera-ca-bundle\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763607 kubelet[2494]: I0113 21:22:38.763318 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/40712e98-8e24-425d-86b6-ff2050ca24e4-typha-certs\") pod \"calico-typha-54f455887-lqpd8\" (UID: \"40712e98-8e24-425d-86b6-ff2050ca24e4\") " pod="calico-system/calico-typha-54f455887-lqpd8" Jan 13 21:22:38.763723 kubelet[2494]: I0113 21:22:38.763334 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99wkv\" (UniqueName: \"kubernetes.io/projected/40712e98-8e24-425d-86b6-ff2050ca24e4-kube-api-access-99wkv\") pod \"calico-typha-54f455887-lqpd8\" (UID: \"40712e98-8e24-425d-86b6-ff2050ca24e4\") " 
pod="calico-system/calico-typha-54f455887-lqpd8" Jan 13 21:22:38.763723 kubelet[2494]: I0113 21:22:38.763349 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-policysync\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763723 kubelet[2494]: I0113 21:22:38.763363 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6h2n\" (UniqueName: \"kubernetes.io/projected/f5433755-69a7-4e58-83b7-b41eee34a30c-kube-api-access-p6h2n\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763723 kubelet[2494]: I0113 21:22:38.763376 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-lib-modules\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763723 kubelet[2494]: I0113 21:22:38.763389 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5433755-69a7-4e58-83b7-b41eee34a30c-node-certs\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763841 kubelet[2494]: I0113 21:22:38.763401 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-cni-net-dir\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.763841 kubelet[2494]: I0113 21:22:38.763416 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5433755-69a7-4e58-83b7-b41eee34a30c-cni-log-dir\") pod \"calico-node-6j7nt\" (UID: \"f5433755-69a7-4e58-83b7-b41eee34a30c\") " pod="calico-system/calico-node-6j7nt" Jan 13 21:22:38.812171 kubelet[2494]: E0113 21:22:38.812099 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:38.863849 kubelet[2494]: I0113 21:22:38.863788 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48bs\" (UniqueName: \"kubernetes.io/projected/bea69c22-42f9-473c-8e07-d63b3f3fd2a2-kube-api-access-j48bs\") pod \"csi-node-driver-r48mv\" (UID: \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\") " pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:38.864054 kubelet[2494]: I0113 21:22:38.863901 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bea69c22-42f9-473c-8e07-d63b3f3fd2a2-socket-dir\") pod \"csi-node-driver-r48mv\" (UID: \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\") " pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:38.864054 
kubelet[2494]: I0113 21:22:38.863928 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bea69c22-42f9-473c-8e07-d63b3f3fd2a2-varrun\") pod \"csi-node-driver-r48mv\" (UID: \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\") " pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:38.864054 kubelet[2494]: I0113 21:22:38.863952 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bea69c22-42f9-473c-8e07-d63b3f3fd2a2-kubelet-dir\") pod \"csi-node-driver-r48mv\" (UID: \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\") " pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:38.864054 kubelet[2494]: I0113 21:22:38.863990 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bea69c22-42f9-473c-8e07-d63b3f3fd2a2-registration-dir\") pod \"csi-node-driver-r48mv\" (UID: \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\") " pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:38.865785 kubelet[2494]: E0113 21:22:38.865671 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.865785 kubelet[2494]: W0113 21:22:38.865714 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.865785 kubelet[2494]: E0113 21:22:38.865743 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.869387 kubelet[2494]: E0113 21:22:38.869311 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.869387 kubelet[2494]: W0113 21:22:38.869336 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.869387 kubelet[2494]: E0113 21:22:38.869358 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.873542 kubelet[2494]: E0113 21:22:38.873435 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.873542 kubelet[2494]: W0113 21:22:38.873452 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.873542 kubelet[2494]: E0113 21:22:38.873471 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.873798 kubelet[2494]: E0113 21:22:38.873705 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.873798 kubelet[2494]: W0113 21:22:38.873717 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.873798 kubelet[2494]: E0113 21:22:38.873726 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.874353 kubelet[2494]: E0113 21:22:38.874337 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.874409 kubelet[2494]: W0113 21:22:38.874398 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.874458 kubelet[2494]: E0113 21:22:38.874448 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.965217 kubelet[2494]: E0113 21:22:38.965126 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.965217 kubelet[2494]: W0113 21:22:38.965153 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.965217 kubelet[2494]: E0113 21:22:38.965187 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.965527 kubelet[2494]: E0113 21:22:38.965504 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.965527 kubelet[2494]: W0113 21:22:38.965526 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.965592 kubelet[2494]: E0113 21:22:38.965546 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.965888 kubelet[2494]: E0113 21:22:38.965870 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.965888 kubelet[2494]: W0113 21:22:38.965882 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.965952 kubelet[2494]: E0113 21:22:38.965912 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.966272 kubelet[2494]: E0113 21:22:38.966246 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.966272 kubelet[2494]: W0113 21:22:38.966259 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.966272 kubelet[2494]: E0113 21:22:38.966274 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.966586 kubelet[2494]: E0113 21:22:38.966568 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.966586 kubelet[2494]: W0113 21:22:38.966584 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.966651 kubelet[2494]: E0113 21:22:38.966606 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.966873 kubelet[2494]: E0113 21:22:38.966846 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.966873 kubelet[2494]: W0113 21:22:38.966862 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.966873 kubelet[2494]: E0113 21:22:38.966881 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.967235 kubelet[2494]: E0113 21:22:38.967218 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.967235 kubelet[2494]: W0113 21:22:38.967229 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.967360 kubelet[2494]: E0113 21:22:38.967327 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.967453 kubelet[2494]: E0113 21:22:38.967438 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.967453 kubelet[2494]: W0113 21:22:38.967449 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.967515 kubelet[2494]: E0113 21:22:38.967491 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.967670 kubelet[2494]: E0113 21:22:38.967655 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.967670 kubelet[2494]: W0113 21:22:38.967666 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.967725 kubelet[2494]: E0113 21:22:38.967698 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.967890 kubelet[2494]: E0113 21:22:38.967872 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.967890 kubelet[2494]: W0113 21:22:38.967882 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.967943 kubelet[2494]: E0113 21:22:38.967914 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.968084 kubelet[2494]: E0113 21:22:38.968069 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.968084 kubelet[2494]: W0113 21:22:38.968079 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.968135 kubelet[2494]: E0113 21:22:38.968110 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.968377 kubelet[2494]: E0113 21:22:38.968360 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.968377 kubelet[2494]: W0113 21:22:38.968371 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.968442 kubelet[2494]: E0113 21:22:38.968385 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.968626 kubelet[2494]: E0113 21:22:38.968600 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.968626 kubelet[2494]: W0113 21:22:38.968613 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.968626 kubelet[2494]: E0113 21:22:38.968627 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.968823 kubelet[2494]: E0113 21:22:38.968811 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.968823 kubelet[2494]: W0113 21:22:38.968819 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.968894 kubelet[2494]: E0113 21:22:38.968834 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.969376 kubelet[2494]: E0113 21:22:38.969067 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.969376 kubelet[2494]: W0113 21:22:38.969086 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.969376 kubelet[2494]: E0113 21:22:38.969113 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.969496 kubelet[2494]: E0113 21:22:38.969430 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.969496 kubelet[2494]: W0113 21:22:38.969440 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.969557 kubelet[2494]: E0113 21:22:38.969543 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.969737 kubelet[2494]: E0113 21:22:38.969718 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.969798 kubelet[2494]: W0113 21:22:38.969731 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.969798 kubelet[2494]: E0113 21:22:38.969776 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.969977 kubelet[2494]: E0113 21:22:38.969935 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.969977 kubelet[2494]: W0113 21:22:38.969950 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.970207 kubelet[2494]: E0113 21:22:38.970185 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.970207 kubelet[2494]: W0113 21:22:38.970201 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.970346 kubelet[2494]: E0113 21:22:38.970275 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.970378 kubelet[2494]: E0113 21:22:38.970347 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.970603 kubelet[2494]: E0113 21:22:38.970586 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.970603 kubelet[2494]: W0113 21:22:38.970598 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.970658 kubelet[2494]: E0113 21:22:38.970613 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.970837 kubelet[2494]: E0113 21:22:38.970821 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.970837 kubelet[2494]: W0113 21:22:38.970833 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.970937 kubelet[2494]: E0113 21:22:38.970843 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.971176 kubelet[2494]: E0113 21:22:38.971153 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.971176 kubelet[2494]: W0113 21:22:38.971175 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.971243 kubelet[2494]: E0113 21:22:38.971191 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:38.971502 kubelet[2494]: E0113 21:22:38.971483 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.971502 kubelet[2494]: W0113 21:22:38.971497 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.971597 kubelet[2494]: E0113 21:22:38.971515 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.971786 kubelet[2494]: E0113 21:22:38.971770 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.971786 kubelet[2494]: W0113 21:22:38.971783 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.971839 kubelet[2494]: E0113 21:22:38.971798 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.972078 kubelet[2494]: E0113 21:22:38.972057 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.972078 kubelet[2494]: W0113 21:22:38.972070 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.972132 kubelet[2494]: E0113 21:22:38.972080 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:38.976380 kubelet[2494]: E0113 21:22:38.976116 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:38.976708 containerd[1470]: time="2025-01-13T21:22:38.976675239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f455887-lqpd8,Uid:40712e98-8e24-425d-86b6-ff2050ca24e4,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:38.977459 kubelet[2494]: E0113 21:22:38.976938 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:38.977459 kubelet[2494]: W0113 21:22:38.976947 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:38.977459 kubelet[2494]: E0113 21:22:38.976956 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:39.003214 containerd[1470]: time="2025-01-13T21:22:39.002528301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:39.003507 containerd[1470]: time="2025-01-13T21:22:39.003267081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:39.003507 containerd[1470]: time="2025-01-13T21:22:39.003313107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:39.003507 containerd[1470]: time="2025-01-13T21:22:39.003430332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:39.019730 kubelet[2494]: E0113 21:22:39.019595 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:39.020187 containerd[1470]: time="2025-01-13T21:22:39.020125836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6j7nt,Uid:f5433755-69a7-4e58-83b7-b41eee34a30c,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:39.024435 systemd[1]: Started cri-containerd-751552e1419e84351c8ef40aee68f588d835abfc640916bfcc472811194493e6.scope - libcontainer container 751552e1419e84351c8ef40aee68f588d835abfc640916bfcc472811194493e6. Jan 13 21:22:39.052503 containerd[1470]: time="2025-01-13T21:22:39.052358511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:39.052503 containerd[1470]: time="2025-01-13T21:22:39.052467181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:39.052503 containerd[1470]: time="2025-01-13T21:22:39.052485565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:39.053324 containerd[1470]: time="2025-01-13T21:22:39.052583826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:39.068931 containerd[1470]: time="2025-01-13T21:22:39.068778438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f455887-lqpd8,Uid:40712e98-8e24-425d-86b6-ff2050ca24e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"751552e1419e84351c8ef40aee68f588d835abfc640916bfcc472811194493e6\"" Jan 13 21:22:39.069862 kubelet[2494]: E0113 21:22:39.069819 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:39.071179 containerd[1470]: time="2025-01-13T21:22:39.071133164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:22:39.072433 systemd[1]: Started cri-containerd-b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4.scope - libcontainer container b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4. 
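The FlexVolume driver-call failures repeated above (and again below) come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a driver that is typically installed later by Calico's flexvol-driver init container (note the flexvol-driver-host host-path volume for calico-node-6j7nt). Until that binary exists, the exec fails, the output is empty, and decoding it as JSON fails with "unexpected end of JSON input". A minimal standalone reproduction of that failure mode; the DriverStatus shape is an assumption for illustration, not kubelet code:

```go
// Reproduce the FlexVolume probe failure: exec the driver with "init", then try
// to decode its JSON reply. A missing binary yields an exec error plus empty
// output, and unmarshalling "" fails with "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the rough shape of a FlexVolume driver reply (assumption).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, execErr := exec.Command(driver, "init").CombinedOutput()
	if execErr != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}

	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		// With no output this prints: unexpected end of JSON input
		fmt.Println("failed to unmarshal driver output:", err)
	}
}
```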
Jan 13 21:22:39.097606 containerd[1470]: time="2025-01-13T21:22:39.097560078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6j7nt,Uid:f5433755-69a7-4e58-83b7-b41eee34a30c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\"" Jan 13 21:22:39.098381 kubelet[2494]: E0113 21:22:39.098343 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:40.452690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973809171.mount: Deactivated successfully. Jan 13 21:22:40.728174 containerd[1470]: time="2025-01-13T21:22:40.728048592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:40.729136 containerd[1470]: time="2025-01-13T21:22:40.728869726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 13 21:22:40.730274 containerd[1470]: time="2025-01-13T21:22:40.730232208Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:40.732436 containerd[1470]: time="2025-01-13T21:22:40.732365491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:40.733158 containerd[1470]: time="2025-01-13T21:22:40.733118409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.661954549s" Jan 13 21:22:40.733190 containerd[1470]: time="2025-01-13T21:22:40.733155798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:22:40.734299 containerd[1470]: time="2025-01-13T21:22:40.734109807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:22:40.741386 containerd[1470]: time="2025-01-13T21:22:40.741346639Z" level=info msg="CreateContainer within sandbox \"751552e1419e84351c8ef40aee68f588d835abfc640916bfcc472811194493e6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:22:40.753534 containerd[1470]: time="2025-01-13T21:22:40.753499203Z" level=info msg="CreateContainer within sandbox \"751552e1419e84351c8ef40aee68f588d835abfc640916bfcc472811194493e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a67b0a7082df7bfcde773990c6e16b666c718083ac106ea5009951977195040d\"" Jan 13 21:22:40.753872 containerd[1470]: time="2025-01-13T21:22:40.753831946Z" level=info msg="StartContainer for \"a67b0a7082df7bfcde773990c6e16b666c718083ac106ea5009951977195040d\"" Jan 13 21:22:40.782431 systemd[1]: Started cri-containerd-a67b0a7082df7bfcde773990c6e16b666c718083ac106ea5009951977195040d.scope - libcontainer container a67b0a7082df7bfcde773990c6e16b666c718083ac106ea5009951977195040d. 
Jan 13 21:22:40.807328 kubelet[2494]: E0113 21:22:40.807223 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:40.823779 containerd[1470]: time="2025-01-13T21:22:40.823741209Z" level=info msg="StartContainer for \"a67b0a7082df7bfcde773990c6e16b666c718083ac106ea5009951977195040d\" returns successfully" Jan 13 21:22:40.934960 kubelet[2494]: E0113 21:22:40.934933 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:40.943704 kubelet[2494]: I0113 21:22:40.943642 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54f455887-lqpd8" podStartSLOduration=1.280482666 podStartE2EDuration="2.943613122s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="2025-01-13 21:22:39.070858508 +0000 UTC m=+11.366397929" lastFinishedPulling="2025-01-13 21:22:40.733988964 +0000 UTC m=+13.029528385" observedRunningTime="2025-01-13 21:22:40.943481991 +0000 UTC m=+13.239021412" watchObservedRunningTime="2025-01-13 21:22:40.943613122 +0000 UTC m=+13.239152543" Jan 13 21:22:40.975562 kubelet[2494]: E0113 21:22:40.975527 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.975562 kubelet[2494]: W0113 21:22:40.975557 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.975713 kubelet[2494]: E0113 21:22:40.975582 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.975800 kubelet[2494]: E0113 21:22:40.975780 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.975831 kubelet[2494]: W0113 21:22:40.975803 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.975831 kubelet[2494]: E0113 21:22:40.975812 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.976106 kubelet[2494]: E0113 21:22:40.976080 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.976106 kubelet[2494]: W0113 21:22:40.976092 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.976106 kubelet[2494]: E0113 21:22:40.976100 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.976358 kubelet[2494]: E0113 21:22:40.976343 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.976358 kubelet[2494]: W0113 21:22:40.976355 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.976417 kubelet[2494]: E0113 21:22:40.976366 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.976610 kubelet[2494]: E0113 21:22:40.976585 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.976610 kubelet[2494]: W0113 21:22:40.976601 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.976660 kubelet[2494]: E0113 21:22:40.976612 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.976834 kubelet[2494]: E0113 21:22:40.976817 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.976834 kubelet[2494]: W0113 21:22:40.976831 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.976888 kubelet[2494]: E0113 21:22:40.976842 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.977064 kubelet[2494]: E0113 21:22:40.977039 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.977064 kubelet[2494]: W0113 21:22:40.977062 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.977107 kubelet[2494]: E0113 21:22:40.977076 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.977333 kubelet[2494]: E0113 21:22:40.977315 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.977368 kubelet[2494]: W0113 21:22:40.977332 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.977368 kubelet[2494]: E0113 21:22:40.977346 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.977608 kubelet[2494]: E0113 21:22:40.977583 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.977608 kubelet[2494]: W0113 21:22:40.977599 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.977654 kubelet[2494]: E0113 21:22:40.977610 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.977841 kubelet[2494]: E0113 21:22:40.977824 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.977872 kubelet[2494]: W0113 21:22:40.977839 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.977872 kubelet[2494]: E0113 21:22:40.977853 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.978108 kubelet[2494]: E0113 21:22:40.978080 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.978108 kubelet[2494]: W0113 21:22:40.978102 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.978267 kubelet[2494]: E0113 21:22:40.978114 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.978711 kubelet[2494]: E0113 21:22:40.978676 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.978711 kubelet[2494]: W0113 21:22:40.978700 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.978711 kubelet[2494]: E0113 21:22:40.978714 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.979265 kubelet[2494]: E0113 21:22:40.979095 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.979265 kubelet[2494]: W0113 21:22:40.979116 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.979265 kubelet[2494]: E0113 21:22:40.979137 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.979639 kubelet[2494]: E0113 21:22:40.979490 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.979639 kubelet[2494]: W0113 21:22:40.979503 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.979639 kubelet[2494]: E0113 21:22:40.979515 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.979798 kubelet[2494]: E0113 21:22:40.979782 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.979798 kubelet[2494]: W0113 21:22:40.979792 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.979846 kubelet[2494]: E0113 21:22:40.979803 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.983032 kubelet[2494]: E0113 21:22:40.983017 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.983032 kubelet[2494]: W0113 21:22:40.983029 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.983116 kubelet[2494]: E0113 21:22:40.983040 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.983323 kubelet[2494]: E0113 21:22:40.983308 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.983323 kubelet[2494]: W0113 21:22:40.983319 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.983378 kubelet[2494]: E0113 21:22:40.983334 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.983617 kubelet[2494]: E0113 21:22:40.983597 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.983617 kubelet[2494]: W0113 21:22:40.983614 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.983680 kubelet[2494]: E0113 21:22:40.983632 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.983863 kubelet[2494]: E0113 21:22:40.983849 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.983863 kubelet[2494]: W0113 21:22:40.983860 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.983915 kubelet[2494]: E0113 21:22:40.983874 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.984089 kubelet[2494]: E0113 21:22:40.984068 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.984089 kubelet[2494]: W0113 21:22:40.984079 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.984141 kubelet[2494]: E0113 21:22:40.984090 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.984366 kubelet[2494]: E0113 21:22:40.984346 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.984366 kubelet[2494]: W0113 21:22:40.984358 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.984423 kubelet[2494]: E0113 21:22:40.984371 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.984662 kubelet[2494]: E0113 21:22:40.984645 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.984662 kubelet[2494]: W0113 21:22:40.984658 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.984726 kubelet[2494]: E0113 21:22:40.984672 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.984876 kubelet[2494]: E0113 21:22:40.984861 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.984876 kubelet[2494]: W0113 21:22:40.984872 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.984921 kubelet[2494]: E0113 21:22:40.984886 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.985198 kubelet[2494]: E0113 21:22:40.985172 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.985236 kubelet[2494]: W0113 21:22:40.985197 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.985236 kubelet[2494]: E0113 21:22:40.985228 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.985487 kubelet[2494]: E0113 21:22:40.985473 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.985487 kubelet[2494]: W0113 21:22:40.985483 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.985548 kubelet[2494]: E0113 21:22:40.985497 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.985724 kubelet[2494]: E0113 21:22:40.985708 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.985724 kubelet[2494]: W0113 21:22:40.985720 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.985776 kubelet[2494]: E0113 21:22:40.985734 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.985928 kubelet[2494]: E0113 21:22:40.985914 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.985928 kubelet[2494]: W0113 21:22:40.985924 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.985977 kubelet[2494]: E0113 21:22:40.985937 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.986160 kubelet[2494]: E0113 21:22:40.986144 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.986160 kubelet[2494]: W0113 21:22:40.986156 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.986219 kubelet[2494]: E0113 21:22:40.986173 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:40.986426 kubelet[2494]: E0113 21:22:40.986410 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.986426 kubelet[2494]: W0113 21:22:40.986423 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.986483 kubelet[2494]: E0113 21:22:40.986438 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.986654 kubelet[2494]: E0113 21:22:40.986630 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.986654 kubelet[2494]: W0113 21:22:40.986642 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.986654 kubelet[2494]: E0113 21:22:40.986655 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.986868 kubelet[2494]: E0113 21:22:40.986852 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.986868 kubelet[2494]: W0113 21:22:40.986865 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.986916 kubelet[2494]: E0113 21:22:40.986882 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.987130 kubelet[2494]: E0113 21:22:40.987107 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.987130 kubelet[2494]: W0113 21:22:40.987119 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.987130 kubelet[2494]: E0113 21:22:40.987127 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:40.987616 kubelet[2494]: E0113 21:22:40.987594 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:40.987616 kubelet[2494]: W0113 21:22:40.987606 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:40.987616 kubelet[2494]: E0113 21:22:40.987615 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.113548 kubelet[2494]: E0113 21:22:41.113521 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:41.181437 kubelet[2494]: E0113 21:22:41.181395 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.181437 kubelet[2494]: W0113 21:22:41.181419 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.181437 kubelet[2494]: E0113 21:22:41.181444 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.181647 kubelet[2494]: E0113 21:22:41.181630 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.181647 kubelet[2494]: W0113 21:22:41.181641 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.181691 kubelet[2494]: E0113 21:22:41.181649 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.181871 kubelet[2494]: E0113 21:22:41.181844 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.181871 kubelet[2494]: W0113 21:22:41.181855 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.181871 kubelet[2494]: E0113 21:22:41.181863 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.182078 kubelet[2494]: E0113 21:22:41.182051 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.182078 kubelet[2494]: W0113 21:22:41.182062 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.182078 kubelet[2494]: E0113 21:22:41.182069 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.182271 kubelet[2494]: E0113 21:22:41.182247 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.182271 kubelet[2494]: W0113 21:22:41.182259 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.182271 kubelet[2494]: E0113 21:22:41.182267 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.182503 kubelet[2494]: E0113 21:22:41.182487 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.182503 kubelet[2494]: W0113 21:22:41.182498 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.182558 kubelet[2494]: E0113 21:22:41.182505 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.182698 kubelet[2494]: E0113 21:22:41.182683 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.182698 kubelet[2494]: W0113 21:22:41.182692 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.182751 kubelet[2494]: E0113 21:22:41.182699 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.182897 kubelet[2494]: E0113 21:22:41.182881 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.182897 kubelet[2494]: W0113 21:22:41.182892 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.182954 kubelet[2494]: E0113 21:22:41.182900 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.183104 kubelet[2494]: E0113 21:22:41.183084 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.183104 kubelet[2494]: W0113 21:22:41.183094 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.183104 kubelet[2494]: E0113 21:22:41.183101 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.183372 kubelet[2494]: E0113 21:22:41.183319 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.183372 kubelet[2494]: W0113 21:22:41.183326 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.183372 kubelet[2494]: E0113 21:22:41.183335 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.183526 kubelet[2494]: E0113 21:22:41.183511 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.183526 kubelet[2494]: W0113 21:22:41.183521 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.183567 kubelet[2494]: E0113 21:22:41.183528 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.183720 kubelet[2494]: E0113 21:22:41.183706 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.183720 kubelet[2494]: W0113 21:22:41.183716 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.183787 kubelet[2494]: E0113 21:22:41.183724 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.183938 kubelet[2494]: E0113 21:22:41.183908 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.183938 kubelet[2494]: W0113 21:22:41.183918 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.183938 kubelet[2494]: E0113 21:22:41.183925 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.184133 kubelet[2494]: E0113 21:22:41.184115 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.184133 kubelet[2494]: W0113 21:22:41.184125 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.184133 kubelet[2494]: E0113 21:22:41.184132 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.184343 kubelet[2494]: E0113 21:22:41.184325 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.184343 kubelet[2494]: W0113 21:22:41.184335 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.184343 kubelet[2494]: E0113 21:22:41.184342 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.184530 kubelet[2494]: E0113 21:22:41.184512 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.184530 kubelet[2494]: W0113 21:22:41.184522 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.184530 kubelet[2494]: E0113 21:22:41.184529 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.184718 kubelet[2494]: E0113 21:22:41.184701 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.184718 kubelet[2494]: W0113 21:22:41.184710 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.184718 kubelet[2494]: E0113 21:22:41.184717 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.184899 kubelet[2494]: E0113 21:22:41.184883 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.184899 kubelet[2494]: W0113 21:22:41.184892 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.184899 kubelet[2494]: E0113 21:22:41.184899 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.185090 kubelet[2494]: E0113 21:22:41.185072 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.185090 kubelet[2494]: W0113 21:22:41.185082 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.185147 kubelet[2494]: E0113 21:22:41.185091 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.185298 kubelet[2494]: E0113 21:22:41.185260 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.185298 kubelet[2494]: W0113 21:22:41.185270 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.185298 kubelet[2494]: E0113 21:22:41.185277 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.185489 kubelet[2494]: E0113 21:22:41.185469 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.185489 kubelet[2494]: W0113 21:22:41.185479 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.185489 kubelet[2494]: E0113 21:22:41.185487 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.185680 kubelet[2494]: E0113 21:22:41.185657 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.185680 kubelet[2494]: W0113 21:22:41.185667 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.185680 kubelet[2494]: E0113 21:22:41.185675 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.185860 kubelet[2494]: E0113 21:22:41.185840 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.185860 kubelet[2494]: W0113 21:22:41.185850 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.185860 kubelet[2494]: E0113 21:22:41.185857 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.186053 kubelet[2494]: E0113 21:22:41.186025 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.186053 kubelet[2494]: W0113 21:22:41.186043 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.186053 kubelet[2494]: E0113 21:22:41.186050 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.186235 kubelet[2494]: E0113 21:22:41.186217 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.186235 kubelet[2494]: W0113 21:22:41.186227 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.186235 kubelet[2494]: E0113 21:22:41.186235 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.595273 kubelet[2494]: E0113 21:22:41.595241 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:41.689353 kubelet[2494]: E0113 21:22:41.689313 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.689353 kubelet[2494]: W0113 21:22:41.689333 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.689353 kubelet[2494]: E0113 21:22:41.689356 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.689629 kubelet[2494]: E0113 21:22:41.689604 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.689629 kubelet[2494]: W0113 21:22:41.689621 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.689703 kubelet[2494]: E0113 21:22:41.689642 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.689876 kubelet[2494]: E0113 21:22:41.689855 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.689876 kubelet[2494]: W0113 21:22:41.689867 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.689876 kubelet[2494]: E0113 21:22:41.689876 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.690154 kubelet[2494]: E0113 21:22:41.690123 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.690154 kubelet[2494]: W0113 21:22:41.690145 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.690154 kubelet[2494]: E0113 21:22:41.690154 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.690409 kubelet[2494]: E0113 21:22:41.690393 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.690409 kubelet[2494]: W0113 21:22:41.690404 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.690409 kubelet[2494]: E0113 21:22:41.690412 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.936369 kubelet[2494]: I0113 21:22:41.936250 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:41.936740 kubelet[2494]: E0113 21:22:41.936587 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:41.992605 kubelet[2494]: E0113 21:22:41.992572 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.992605 kubelet[2494]: W0113 21:22:41.992589 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.992605 kubelet[2494]: E0113 21:22:41.992604 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.992885 kubelet[2494]: E0113 21:22:41.992859 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.992885 kubelet[2494]: W0113 21:22:41.992871 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.992885 kubelet[2494]: E0113 21:22:41.992879 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.993085 kubelet[2494]: E0113 21:22:41.993066 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.993085 kubelet[2494]: W0113 21:22:41.993077 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.993085 kubelet[2494]: E0113 21:22:41.993084 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.993276 kubelet[2494]: E0113 21:22:41.993259 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.993276 kubelet[2494]: W0113 21:22:41.993269 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.993276 kubelet[2494]: E0113 21:22:41.993277 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.993517 kubelet[2494]: E0113 21:22:41.993492 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.993517 kubelet[2494]: W0113 21:22:41.993504 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.993517 kubelet[2494]: E0113 21:22:41.993512 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.993692 kubelet[2494]: E0113 21:22:41.993675 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.993692 kubelet[2494]: W0113 21:22:41.993684 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.993692 kubelet[2494]: E0113 21:22:41.993693 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.993875 kubelet[2494]: E0113 21:22:41.993857 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.993875 kubelet[2494]: W0113 21:22:41.993867 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.993875 kubelet[2494]: E0113 21:22:41.993874 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.994060 kubelet[2494]: E0113 21:22:41.994042 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.994060 kubelet[2494]: W0113 21:22:41.994052 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.994060 kubelet[2494]: E0113 21:22:41.994059 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.994334 kubelet[2494]: E0113 21:22:41.994318 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.994334 kubelet[2494]: W0113 21:22:41.994329 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.994394 kubelet[2494]: E0113 21:22:41.994337 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.994533 kubelet[2494]: E0113 21:22:41.994519 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.994533 kubelet[2494]: W0113 21:22:41.994529 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.994580 kubelet[2494]: E0113 21:22:41.994536 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.994714 kubelet[2494]: E0113 21:22:41.994701 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.994714 kubelet[2494]: W0113 21:22:41.994710 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.994763 kubelet[2494]: E0113 21:22:41.994717 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.994894 kubelet[2494]: E0113 21:22:41.994879 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.994894 kubelet[2494]: W0113 21:22:41.994890 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.994939 kubelet[2494]: E0113 21:22:41.994897 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.995096 kubelet[2494]: E0113 21:22:41.995081 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.995096 kubelet[2494]: W0113 21:22:41.995092 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.995142 kubelet[2494]: E0113 21:22:41.995099 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.995312 kubelet[2494]: E0113 21:22:41.995265 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.995312 kubelet[2494]: W0113 21:22:41.995276 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.995312 kubelet[2494]: E0113 21:22:41.995299 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.995472 kubelet[2494]: E0113 21:22:41.995458 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.995472 kubelet[2494]: W0113 21:22:41.995468 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.995514 kubelet[2494]: E0113 21:22:41.995475 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.995677 kubelet[2494]: E0113 21:22:41.995663 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.995677 kubelet[2494]: W0113 21:22:41.995673 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.995724 kubelet[2494]: E0113 21:22:41.995681 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.995914 kubelet[2494]: E0113 21:22:41.995899 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.995914 kubelet[2494]: W0113 21:22:41.995910 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.995959 kubelet[2494]: E0113 21:22:41.995925 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.996135 kubelet[2494]: E0113 21:22:41.996120 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.996135 kubelet[2494]: W0113 21:22:41.996131 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.996191 kubelet[2494]: E0113 21:22:41.996144 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.996365 kubelet[2494]: E0113 21:22:41.996351 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.996365 kubelet[2494]: W0113 21:22:41.996362 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.996414 kubelet[2494]: E0113 21:22:41.996375 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.996552 kubelet[2494]: E0113 21:22:41.996538 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.996552 kubelet[2494]: W0113 21:22:41.996548 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.996600 kubelet[2494]: E0113 21:22:41.996561 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.996722 kubelet[2494]: E0113 21:22:41.996709 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.996722 kubelet[2494]: W0113 21:22:41.996718 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.996775 kubelet[2494]: E0113 21:22:41.996730 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.997012 kubelet[2494]: E0113 21:22:41.996976 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.997041 kubelet[2494]: W0113 21:22:41.997011 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.997063 kubelet[2494]: E0113 21:22:41.997040 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.997248 kubelet[2494]: E0113 21:22:41.997233 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.997248 kubelet[2494]: W0113 21:22:41.997246 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.997322 kubelet[2494]: E0113 21:22:41.997277 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.997459 kubelet[2494]: E0113 21:22:41.997444 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.997459 kubelet[2494]: W0113 21:22:41.997454 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.997504 kubelet[2494]: E0113 21:22:41.997480 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.997683 kubelet[2494]: E0113 21:22:41.997669 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.997683 kubelet[2494]: W0113 21:22:41.997679 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.997735 kubelet[2494]: E0113 21:22:41.997693 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.997917 kubelet[2494]: E0113 21:22:41.997901 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.997917 kubelet[2494]: W0113 21:22:41.997914 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.997967 kubelet[2494]: E0113 21:22:41.997928 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.998152 kubelet[2494]: E0113 21:22:41.998137 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.998152 kubelet[2494]: W0113 21:22:41.998149 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.998210 kubelet[2494]: E0113 21:22:41.998162 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.998409 kubelet[2494]: E0113 21:22:41.998393 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.998409 kubelet[2494]: W0113 21:22:41.998406 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.998457 kubelet[2494]: E0113 21:22:41.998419 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.998613 kubelet[2494]: E0113 21:22:41.998598 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.998613 kubelet[2494]: W0113 21:22:41.998609 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.998653 kubelet[2494]: E0113 21:22:41.998619 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.998788 kubelet[2494]: E0113 21:22:41.998774 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.998788 kubelet[2494]: W0113 21:22:41.998784 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.998838 kubelet[2494]: E0113 21:22:41.998795 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.998973 kubelet[2494]: E0113 21:22:41.998958 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.998973 kubelet[2494]: W0113 21:22:41.998970 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.999022 kubelet[2494]: E0113 21:22:41.998982 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:41.999174 kubelet[2494]: E0113 21:22:41.999160 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.999174 kubelet[2494]: W0113 21:22:41.999171 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.999219 kubelet[2494]: E0113 21:22:41.999179 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:41.999541 kubelet[2494]: E0113 21:22:41.999526 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:41.999541 kubelet[2494]: W0113 21:22:41.999536 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:41.999592 kubelet[2494]: E0113 21:22:41.999545 2494 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:42.557936 containerd[1470]: time="2025-01-13T21:22:42.557887448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:42.585013 containerd[1470]: time="2025-01-13T21:22:42.584937120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 21:22:42.624179 containerd[1470]: time="2025-01-13T21:22:42.624135205Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:42.626757 containerd[1470]: time="2025-01-13T21:22:42.626708650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:42.627273 containerd[1470]: time="2025-01-13T21:22:42.627243117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.893103846s" Jan 13 21:22:42.627361 containerd[1470]: time="2025-01-13T21:22:42.627271880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:22:42.629435 containerd[1470]: time="2025-01-13T21:22:42.629399090Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:22:42.644165 containerd[1470]: time="2025-01-13T21:22:42.644122942Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e\"" Jan 13 21:22:42.644586 containerd[1470]: time="2025-01-13T21:22:42.644562794Z" level=info msg="StartContainer for \"021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e\"" Jan 13 21:22:42.677169 systemd[1]: run-containerd-runc-k8s.io-021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e-runc.Xo7jKW.mount: Deactivated successfully. 
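The block of kubelet errors above is the FlexVolume probe loop: the kubelet finds the nodeagent~uds plugin directory, runs its driver with the single argument init, and expects a small JSON handshake on stdout. Because the flexvol-driver container created just above has not yet installed the uds binary, the call produces no output and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of that handshake, assuming the documented FlexVolume init reply format (illustrative only, not Calico's actual driver):

import json

# Reply a FlexVolume driver is expected to print on stdout for "<driver> init";
# "attach": False tells the kubelet there is no separate attach/detach step.
init_reply = {"status": "Success", "capabilities": {"attach": False}}
print(json.dumps(init_reply))

# With the uds binary still missing, the kubelet reads empty output, and an
# empty string is not valid JSON -- the Python equivalent of that failure:
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print("Failed to unmarshal output for command: init:", exc)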
Jan 13 21:22:42.686404 systemd[1]: Started cri-containerd-021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e.scope - libcontainer container 021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e. Jan 13 21:22:42.720152 containerd[1470]: time="2025-01-13T21:22:42.720099523Z" level=info msg="StartContainer for \"021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e\" returns successfully" Jan 13 21:22:42.735008 systemd[1]: cri-containerd-021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e.scope: Deactivated successfully. Jan 13 21:22:42.807719 kubelet[2494]: E0113 21:22:42.807644 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:42.941204 kubelet[2494]: E0113 21:22:42.941003 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:43.278130 containerd[1470]: time="2025-01-13T21:22:43.277789355Z" level=info msg="shim disconnected" id=021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e namespace=k8s.io Jan 13 21:22:43.278130 containerd[1470]: time="2025-01-13T21:22:43.277967253Z" level=warning msg="cleaning up after shim disconnected" id=021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e namespace=k8s.io Jan 13 21:22:43.278130 containerd[1470]: time="2025-01-13T21:22:43.277990316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:22:43.639922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-021a2af3da8375299af306c7535ec9b5ee33a25d98e8f9648d4ee5157e8bf44e-rootfs.mount: Deactivated successfully. 
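The recurring dns.go "Nameserver limits exceeded" warning is the kubelet trimming the node's resolv.conf for pod DNS: it passes along at most three nameservers, and the reported line (1.1.1.1 1.0.0.1 8.8.8.8) is what survives the cut. A rough sketch of that truncation, using a hypothetical resolv.conf whose fourth entry (9.9.9.9) is invented for illustration:

# Hypothetical node resolv.conf; only the first three nameservers are applied
# to pod DNS config, the rest are omitted with a warning.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""

MAX_NAMESERVERS = 3  # kubelet keeps at most three nameservers for pods
servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.strip().startswith("nameserver")]
print("the applied nameserver line is:", " ".join(servers[:MAX_NAMESERVERS]))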
Jan 13 21:22:43.943985 kubelet[2494]: E0113 21:22:43.943833 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:43.944704 containerd[1470]: time="2025-01-13T21:22:43.944655383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:22:44.807276 kubelet[2494]: E0113 21:22:44.807216 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:46.809679 kubelet[2494]: E0113 21:22:46.809613 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:48.808499 kubelet[2494]: E0113 21:22:48.808419 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:50.038933 kubelet[2494]: I0113 21:22:50.038870 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:50.039777 kubelet[2494]: E0113 21:22:50.039397 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:50.250674 containerd[1470]: time="2025-01-13T21:22:50.250610156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:50.251818 containerd[1470]: time="2025-01-13T21:22:50.251724207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:22:50.253761 containerd[1470]: time="2025-01-13T21:22:50.253698216Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:50.256806 containerd[1470]: time="2025-01-13T21:22:50.256748716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:50.257400 containerd[1470]: time="2025-01-13T21:22:50.257362096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.312655558s" Jan 13 21:22:50.257457 containerd[1470]: time="2025-01-13T21:22:50.257398303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:22:50.260268 
containerd[1470]: time="2025-01-13T21:22:50.260235526Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:22:50.282926 containerd[1470]: time="2025-01-13T21:22:50.282825655Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e\"" Jan 13 21:22:50.283766 containerd[1470]: time="2025-01-13T21:22:50.283711571Z" level=info msg="StartContainer for \"32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e\"" Jan 13 21:22:50.327742 systemd[1]: Started cri-containerd-32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e.scope - libcontainer container 32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e. Jan 13 21:22:50.372051 containerd[1470]: time="2025-01-13T21:22:50.371982379Z" level=info msg="StartContainer for \"32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e\" returns successfully" Jan 13 21:22:50.808374 kubelet[2494]: E0113 21:22:50.808134 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:50.957234 kubelet[2494]: E0113 21:22:50.957184 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:50.957418 kubelet[2494]: E0113 21:22:50.957267 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:51.507402 systemd[1]: cri-containerd-32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e.scope: Deactivated successfully. Jan 13 21:22:51.528798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e-rootfs.mount: Deactivated successfully. 
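The install-cni container that just ran and exited (hence the scope deactivation above) is what eventually clears the "cni plugin not initialized" condition blocking csi-node-driver-r48mv: containerd's CRI plugin keeps reporting NetworkReady=false until a CNI network config appears in its conf dir. A small check along those lines; the /etc/cni/net.d path is the usual default and an assumption here, not read from this host's config:

import glob
import os

# containerd reports NetworkReady=false until a CNI network config exists in
# its conf dir; Calico's install-cni writes one (typically 10-calico.conflist).
CNI_CONF_DIR = "/etc/cni/net.d"  # assumed default location

confs = sorted(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*")))
if confs:
    print("CNI config present, network can become ready:", confs[0])
else:
    print("NetworkReady=false: cni plugin not initialized")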
Jan 13 21:22:51.538681 kubelet[2494]: I0113 21:22:51.538643 2494 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:22:51.958957 kubelet[2494]: E0113 21:22:51.958898 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:52.066860 kubelet[2494]: I0113 21:22:52.066809 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8db2a05b-6de2-4a2c-8a45-6f493623a948-calico-apiserver-certs\") pod \"calico-apiserver-7bff5578d8-wgmqb\" (UID: \"8db2a05b-6de2-4a2c-8a45-6f493623a948\") " pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" Jan 13 21:22:52.066860 kubelet[2494]: I0113 21:22:52.066854 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4rm\" (UniqueName: \"kubernetes.io/projected/8db2a05b-6de2-4a2c-8a45-6f493623a948-kube-api-access-cq4rm\") pod \"calico-apiserver-7bff5578d8-wgmqb\" (UID: \"8db2a05b-6de2-4a2c-8a45-6f493623a948\") " pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" Jan 13 21:22:52.145470 systemd[1]: Created slice kubepods-besteffort-pod8db2a05b_6de2_4a2c_8a45_6f493623a948.slice - libcontainer container kubepods-besteffort-pod8db2a05b_6de2_4a2c_8a45_6f493623a948.slice. Jan 13 21:22:52.151173 systemd[1]: Created slice kubepods-burstable-pod84cfaf78_a4f0_4a07_a518_aac1fc6dfb0c.slice - libcontainer container kubepods-burstable-pod84cfaf78_a4f0_4a07_a518_aac1fc6dfb0c.slice. Jan 13 21:22:52.157447 systemd[1]: Created slice kubepods-besteffort-pod5431517f_80a8_45c9_b517_ab6eb8f8217a.slice - libcontainer container kubepods-besteffort-pod5431517f_80a8_45c9_b517_ab6eb8f8217a.slice. Jan 13 21:22:52.158709 containerd[1470]: time="2025-01-13T21:22:52.158648368Z" level=info msg="shim disconnected" id=32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e namespace=k8s.io Jan 13 21:22:52.159208 containerd[1470]: time="2025-01-13T21:22:52.158708019Z" level=warning msg="cleaning up after shim disconnected" id=32223a70824203f5d0fc620e8ace8e7ea9d99db254232889af058a20dfb3ee4e namespace=k8s.io Jan 13 21:22:52.159208 containerd[1470]: time="2025-01-13T21:22:52.158721745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:22:52.162654 systemd[1]: Created slice kubepods-burstable-poddc0cbdaa_90e4_4882_a40a_2f9f6a0b3e5a.slice - libcontainer container kubepods-burstable-poddc0cbdaa_90e4_4882_a40a_2f9f6a0b3e5a.slice. 
Jan 13 21:22:52.167993 kubelet[2494]: I0113 21:22:52.167949 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5431517f-80a8-45c9-b517-ab6eb8f8217a-calico-apiserver-certs\") pod \"calico-apiserver-7bff5578d8-dbz9n\" (UID: \"5431517f-80a8-45c9-b517-ab6eb8f8217a\") " pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" Jan 13 21:22:52.168073 kubelet[2494]: I0113 21:22:52.168006 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c-config-volume\") pod \"coredns-6f6b679f8f-jdsfv\" (UID: \"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c\") " pod="kube-system/coredns-6f6b679f8f-jdsfv" Jan 13 21:22:52.168073 kubelet[2494]: I0113 21:22:52.168062 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kp22\" (UniqueName: \"kubernetes.io/projected/84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c-kube-api-access-5kp22\") pod \"coredns-6f6b679f8f-jdsfv\" (UID: \"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c\") " pod="kube-system/coredns-6f6b679f8f-jdsfv" Jan 13 21:22:52.168141 kubelet[2494]: I0113 21:22:52.168091 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9wb\" (UniqueName: \"kubernetes.io/projected/5431517f-80a8-45c9-b517-ab6eb8f8217a-kube-api-access-lg9wb\") pod \"calico-apiserver-7bff5578d8-dbz9n\" (UID: \"5431517f-80a8-45c9-b517-ab6eb8f8217a\") " pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" Jan 13 21:22:52.169411 systemd[1]: Created slice kubepods-besteffort-pod720f2bda_e32c_4788_8276_d58130a626c1.slice - libcontainer container kubepods-besteffort-pod720f2bda_e32c_4788_8276_d58130a626c1.slice. 
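The kubepods-besteffort-* and kubepods-burstable-* slice names created above encode each pod's QoS class, which the kubelet derives from container resource requests and limits: the coredns pods land in burstable slices, while calico-apiserver and calico-kube-controllers land in besteffort. A simplified sketch of that classification (not the kubelet's exact per-resource rules):

def qos_class(containers):
    """Simplified QoS classification from per-container requests/limits."""
    requests = [c.get("requests") or {} for c in containers]
    limits = [c.get("limits") or {} for c in containers]
    if not any(requests) and not any(limits):
        return "BestEffort"
    if all(r and r == l for r, l in zip(requests, limits)):
        return "Guaranteed"
    return "Burstable"

# calico-apiserver sets no requests/limits here -> besteffort slice;
# coredns-style resources (values assumed for illustration) -> burstable.
print(qos_class([{}]))                                            # BestEffort
print(qos_class([{"requests": {"cpu": "100m", "memory": "70Mi"},
                  "limits": {"memory": "170Mi"}}]))               # Burstable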
Jan 13 21:22:52.268990 kubelet[2494]: I0113 21:22:52.268864 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a-config-volume\") pod \"coredns-6f6b679f8f-hch8x\" (UID: \"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a\") " pod="kube-system/coredns-6f6b679f8f-hch8x" Jan 13 21:22:52.268990 kubelet[2494]: I0113 21:22:52.268918 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbbn\" (UniqueName: \"kubernetes.io/projected/dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a-kube-api-access-wpbbn\") pod \"coredns-6f6b679f8f-hch8x\" (UID: \"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a\") " pod="kube-system/coredns-6f6b679f8f-hch8x" Jan 13 21:22:52.268990 kubelet[2494]: I0113 21:22:52.268949 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/720f2bda-e32c-4788-8276-d58130a626c1-tigera-ca-bundle\") pod \"calico-kube-controllers-69bcc55845-fstwj\" (UID: \"720f2bda-e32c-4788-8276-d58130a626c1\") " pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" Jan 13 21:22:52.268990 kubelet[2494]: I0113 21:22:52.268974 2494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkjqh\" (UniqueName: \"kubernetes.io/projected/720f2bda-e32c-4788-8276-d58130a626c1-kube-api-access-jkjqh\") pod \"calico-kube-controllers-69bcc55845-fstwj\" (UID: \"720f2bda-e32c-4788-8276-d58130a626c1\") " pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" Jan 13 21:22:52.459091 kubelet[2494]: E0113 21:22:52.459014 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:52.460072 containerd[1470]: time="2025-01-13T21:22:52.459882075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-wgmqb,Uid:8db2a05b-6de2-4a2c-8a45-6f493623a948,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:22:52.460766 containerd[1470]: time="2025-01-13T21:22:52.459908354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdsfv,Uid:84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:52.460766 containerd[1470]: time="2025-01-13T21:22:52.460541994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-dbz9n,Uid:5431517f-80a8-45c9-b517-ab6eb8f8217a,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:22:52.466674 kubelet[2494]: E0113 21:22:52.466624 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:52.467570 containerd[1470]: time="2025-01-13T21:22:52.467520057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hch8x,Uid:dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:52.479643 containerd[1470]: time="2025-01-13T21:22:52.479531227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bcc55845-fstwj,Uid:720f2bda-e32c-4788-8276-d58130a626c1,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:52.601738 containerd[1470]: time="2025-01-13T21:22:52.601679048Z" level=error msg="Failed to destroy network for sandbox 
\"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.603422 containerd[1470]: time="2025-01-13T21:22:52.602680456Z" level=error msg="encountered an error cleaning up failed sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.603422 containerd[1470]: time="2025-01-13T21:22:52.602733786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-wgmqb,Uid:8db2a05b-6de2-4a2c-8a45-6f493623a948,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.603757 kubelet[2494]: E0113 21:22:52.603704 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.604136 kubelet[2494]: E0113 21:22:52.603788 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" Jan 13 21:22:52.604136 kubelet[2494]: E0113 21:22:52.603811 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" Jan 13 21:22:52.604136 kubelet[2494]: E0113 21:22:52.603865 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bff5578d8-wgmqb_calico-apiserver(8db2a05b-6de2-4a2c-8a45-6f493623a948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bff5578d8-wgmqb_calico-apiserver(8db2a05b-6de2-4a2c-8a45-6f493623a948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" podUID="8db2a05b-6de2-4a2c-8a45-6f493623a948" Jan 
13 21:22:52.604652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9-shm.mount: Deactivated successfully. Jan 13 21:22:52.605133 containerd[1470]: time="2025-01-13T21:22:52.605096925Z" level=error msg="Failed to destroy network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.605667 containerd[1470]: time="2025-01-13T21:22:52.605640830Z" level=error msg="encountered an error cleaning up failed sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.605748 containerd[1470]: time="2025-01-13T21:22:52.605679182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-dbz9n,Uid:5431517f-80a8-45c9-b517-ab6eb8f8217a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.606769 containerd[1470]: time="2025-01-13T21:22:52.606360413Z" level=error msg="Failed to destroy network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.606827 kubelet[2494]: E0113 21:22:52.605888 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.606827 kubelet[2494]: E0113 21:22:52.605937 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" Jan 13 21:22:52.606827 kubelet[2494]: E0113 21:22:52.605954 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" Jan 13 21:22:52.606919 kubelet[2494]: E0113 21:22:52.605987 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-7bff5578d8-dbz9n_calico-apiserver(5431517f-80a8-45c9-b517-ab6eb8f8217a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bff5578d8-dbz9n_calico-apiserver(5431517f-80a8-45c9-b517-ab6eb8f8217a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" podUID="5431517f-80a8-45c9-b517-ab6eb8f8217a" Jan 13 21:22:52.607845 containerd[1470]: time="2025-01-13T21:22:52.607767709Z" level=error msg="encountered an error cleaning up failed sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.607889 containerd[1470]: time="2025-01-13T21:22:52.607850854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdsfv,Uid:84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.608169 kubelet[2494]: E0113 21:22:52.608113 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.608215 kubelet[2494]: E0113 21:22:52.608200 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jdsfv" Jan 13 21:22:52.608247 kubelet[2494]: E0113 21:22:52.608220 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jdsfv" Jan 13 21:22:52.609876 kubelet[2494]: E0113 21:22:52.608263 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jdsfv_kube-system(84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jdsfv_kube-system(84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jdsfv" podUID="84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c" Jan 13 21:22:52.608804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350-shm.mount: Deactivated successfully. Jan 13 21:22:52.608895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c-shm.mount: Deactivated successfully. Jan 13 21:22:52.814884 systemd[1]: Created slice kubepods-besteffort-podbea69c22_42f9_473c_8e07_d63b3f3fd2a2.slice - libcontainer container kubepods-besteffort-podbea69c22_42f9_473c_8e07_d63b3f3fd2a2.slice. Jan 13 21:22:52.817564 containerd[1470]: time="2025-01-13T21:22:52.817475863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r48mv,Uid:bea69c22-42f9-473c-8e07-d63b3f3fd2a2,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:52.870246 containerd[1470]: time="2025-01-13T21:22:52.870071001Z" level=error msg="Failed to destroy network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.870643 containerd[1470]: time="2025-01-13T21:22:52.870605999Z" level=error msg="encountered an error cleaning up failed sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.870709 containerd[1470]: time="2025-01-13T21:22:52.870676260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hch8x,Uid:dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.871108 kubelet[2494]: E0113 21:22:52.871039 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.871219 kubelet[2494]: E0113 21:22:52.871125 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hch8x" Jan 13 21:22:52.871336 kubelet[2494]: E0113 
21:22:52.871229 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hch8x" Jan 13 21:22:52.871414 kubelet[2494]: E0113 21:22:52.871335 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hch8x_kube-system(dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hch8x_kube-system(dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hch8x" podUID="dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a" Jan 13 21:22:52.958170 containerd[1470]: time="2025-01-13T21:22:52.957985982Z" level=error msg="Failed to destroy network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.958766 containerd[1470]: time="2025-01-13T21:22:52.958704663Z" level=error msg="encountered an error cleaning up failed sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.958924 containerd[1470]: time="2025-01-13T21:22:52.958773030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bcc55845-fstwj,Uid:720f2bda-e32c-4788-8276-d58130a626c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.959084 kubelet[2494]: E0113 21:22:52.959040 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:52.959168 kubelet[2494]: E0113 21:22:52.959126 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" Jan 13 21:22:52.959168 kubelet[2494]: E0113 21:22:52.959155 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" Jan 13 21:22:52.959632 kubelet[2494]: E0113 21:22:52.959208 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69bcc55845-fstwj_calico-system(720f2bda-e32c-4788-8276-d58130a626c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69bcc55845-fstwj_calico-system(720f2bda-e32c-4788-8276-d58130a626c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" podUID="720f2bda-e32c-4788-8276-d58130a626c1" Jan 13 21:22:52.962990 kubelet[2494]: E0113 21:22:52.962964 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:52.964758 kubelet[2494]: I0113 21:22:52.964625 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:22:52.966252 containerd[1470]: time="2025-01-13T21:22:52.965722307Z" level=info msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" Jan 13 21:22:52.966252 containerd[1470]: time="2025-01-13T21:22:52.965895891Z" level=info msg="Ensure that sandbox dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912 in task-service has been cleanup successfully" Jan 13 21:22:52.968196 kubelet[2494]: I0113 21:22:52.967383 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:22:52.968316 containerd[1470]: time="2025-01-13T21:22:52.967975812Z" level=info msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" Jan 13 21:22:52.968316 containerd[1470]: time="2025-01-13T21:22:52.968192787Z" level=info msg="Ensure that sandbox e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c in task-service has been cleanup successfully" Jan 13 21:22:52.969378 containerd[1470]: time="2025-01-13T21:22:52.969345828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:22:52.972685 kubelet[2494]: I0113 21:22:52.972657 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:22:52.973331 containerd[1470]: time="2025-01-13T21:22:52.973196524Z" level=info msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" Jan 13 21:22:52.973457 containerd[1470]: time="2025-01-13T21:22:52.973410482Z" level=info 
msg="Ensure that sandbox 8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd in task-service has been cleanup successfully" Jan 13 21:22:52.974956 kubelet[2494]: I0113 21:22:52.974936 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:22:52.975417 containerd[1470]: time="2025-01-13T21:22:52.975382472Z" level=info msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" Jan 13 21:22:52.975808 containerd[1470]: time="2025-01-13T21:22:52.975617651Z" level=info msg="Ensure that sandbox daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350 in task-service has been cleanup successfully" Jan 13 21:22:52.981468 kubelet[2494]: I0113 21:22:52.981422 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:22:52.985831 containerd[1470]: time="2025-01-13T21:22:52.985606509Z" level=info msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" Jan 13 21:22:52.987536 containerd[1470]: time="2025-01-13T21:22:52.987356384Z" level=info msg="Ensure that sandbox c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9 in task-service has been cleanup successfully" Jan 13 21:22:53.042349 containerd[1470]: time="2025-01-13T21:22:53.040301377Z" level=error msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" failed" error="failed to destroy network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.043469 kubelet[2494]: E0113 21:22:53.043425 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:22:53.043582 kubelet[2494]: E0113 21:22:53.043518 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c"} Jan 13 21:22:53.043627 kubelet[2494]: E0113 21:22:53.043607 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:53.043718 kubelet[2494]: E0113 21:22:53.043643 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jdsfv" podUID="84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c" Jan 13 21:22:53.044190 containerd[1470]: time="2025-01-13T21:22:53.044142179Z" level=error msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" failed" error="failed to destroy network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.044321 kubelet[2494]: E0113 21:22:53.044251 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:22:53.044321 kubelet[2494]: E0113 21:22:53.044275 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912"} Jan 13 21:22:53.044406 kubelet[2494]: E0113 21:22:53.044319 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"720f2bda-e32c-4788-8276-d58130a626c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:53.044406 kubelet[2494]: E0113 21:22:53.044335 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"720f2bda-e32c-4788-8276-d58130a626c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" podUID="720f2bda-e32c-4788-8276-d58130a626c1" Jan 13 21:22:53.051569 containerd[1470]: time="2025-01-13T21:22:53.051502086Z" level=error msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" failed" error="failed to destroy network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.051716 containerd[1470]: time="2025-01-13T21:22:53.051506795Z" level=error msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" failed" error="failed to destroy network for sandbox 
\"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.051902 kubelet[2494]: E0113 21:22:53.051861 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:22:53.051963 kubelet[2494]: E0113 21:22:53.051925 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350"} Jan 13 21:22:53.051988 kubelet[2494]: E0113 21:22:53.051969 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5431517f-80a8-45c9-b517-ab6eb8f8217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:53.052058 kubelet[2494]: E0113 21:22:53.051861 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:22:53.052058 kubelet[2494]: E0113 21:22:53.051999 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5431517f-80a8-45c9-b517-ab6eb8f8217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" podUID="5431517f-80a8-45c9-b517-ab6eb8f8217a" Jan 13 21:22:53.052058 kubelet[2494]: E0113 21:22:53.052013 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9"} Jan 13 21:22:53.052058 kubelet[2494]: E0113 21:22:53.052051 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8db2a05b-6de2-4a2c-8a45-6f493623a948\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 
21:22:53.052173 kubelet[2494]: E0113 21:22:53.052078 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8db2a05b-6de2-4a2c-8a45-6f493623a948\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" podUID="8db2a05b-6de2-4a2c-8a45-6f493623a948" Jan 13 21:22:53.054593 containerd[1470]: time="2025-01-13T21:22:53.054558140Z" level=error msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" failed" error="failed to destroy network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.054722 kubelet[2494]: E0113 21:22:53.054690 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:22:53.054797 kubelet[2494]: E0113 21:22:53.054720 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd"} Jan 13 21:22:53.054797 kubelet[2494]: E0113 21:22:53.054743 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:53.054797 kubelet[2494]: E0113 21:22:53.054762 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hch8x" podUID="dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a" Jan 13 21:22:53.058012 containerd[1470]: time="2025-01-13T21:22:53.057933578Z" level=error msg="Failed to destroy network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.058387 containerd[1470]: 
time="2025-01-13T21:22:53.058345459Z" level=error msg="encountered an error cleaning up failed sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.058438 containerd[1470]: time="2025-01-13T21:22:53.058409192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r48mv,Uid:bea69c22-42f9-473c-8e07-d63b3f3fd2a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.058619 kubelet[2494]: E0113 21:22:53.058581 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:53.058700 kubelet[2494]: E0113 21:22:53.058632 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:53.058700 kubelet[2494]: E0113 21:22:53.058649 2494 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r48mv" Jan 13 21:22:53.058700 kubelet[2494]: E0113 21:22:53.058691 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r48mv_calico-system(bea69c22-42f9-473c-8e07-d63b3f3fd2a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r48mv_calico-system(bea69c22-42f9-473c-8e07-d63b3f3fd2a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:53.530491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd-shm.mount: Deactivated successfully. 
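Every sandbox failure in this stretch shares one dependency: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and until then both the network add for new sandboxes and the delete during cleanup fail, which is why the StopPodSandbox attempts above fail with the identical message. A tiny sketch of that gate; the path comes straight from the error text, while the check itself is illustrative rather than Calico's actual code:

import os

NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node at startup

def calico_cni_ready() -> bool:
    # The CNI plugin needs the node name recorded in this file before it can
    # set up or tear down pod networking on the host.
    return os.path.exists(NODENAME_FILE)

if not calico_cni_ready():
    print("stat /var/lib/calico/nodename: no such file or directory: "
          "check that the calico/node container is running and has mounted /var/lib/calico/")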
Jan 13 21:22:53.984189 kubelet[2494]: I0113 21:22:53.984138 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:22:53.984869 containerd[1470]: time="2025-01-13T21:22:53.984829262Z" level=info msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" Jan 13 21:22:53.985147 containerd[1470]: time="2025-01-13T21:22:53.985052500Z" level=info msg="Ensure that sandbox b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225 in task-service has been cleanup successfully" Jan 13 21:22:54.011087 containerd[1470]: time="2025-01-13T21:22:54.011020129Z" level=error msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" failed" error="failed to destroy network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:54.011360 kubelet[2494]: E0113 21:22:54.011302 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:22:54.011430 kubelet[2494]: E0113 21:22:54.011362 2494 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225"} Jan 13 21:22:54.011430 kubelet[2494]: E0113 21:22:54.011407 2494 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:54.011549 kubelet[2494]: E0113 21:22:54.011435 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bea69c22-42f9-473c-8e07-d63b3f3fd2a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r48mv" podUID="bea69c22-42f9-473c-8e07-d63b3f3fd2a2" Jan 13 21:22:59.645183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313785955.mount: Deactivated successfully. Jan 13 21:23:00.211374 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:48346.service - OpenSSH per-connection server daemon (10.0.0.1:48346). 
Jan 13 21:23:00.253274 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 48346 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:00.255083 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:00.260136 systemd-logind[1451]: New session 8 of user core. Jan 13 21:23:00.269511 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:23:00.448629 sshd[3700]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:00.453258 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:48346.service: Deactivated successfully. Jan 13 21:23:00.455485 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:23:00.456183 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:23:00.457093 systemd-logind[1451]: Removed session 8. Jan 13 21:23:01.060379 containerd[1470]: time="2025-01-13T21:23:01.060235250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:01.093002 containerd[1470]: time="2025-01-13T21:23:01.092943023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:23:01.097729 containerd[1470]: time="2025-01-13T21:23:01.097687738Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:01.116740 containerd[1470]: time="2025-01-13T21:23:01.116679245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:01.117590 containerd[1470]: time="2025-01-13T21:23:01.117544590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.148159979s" Jan 13 21:23:01.117590 containerd[1470]: time="2025-01-13T21:23:01.117581100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:23:01.153862 containerd[1470]: time="2025-01-13T21:23:01.153794347Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:23:01.416716 containerd[1470]: time="2025-01-13T21:23:01.416561894Z" level=info msg="CreateContainer within sandbox \"b1af8de6989e11cc6c9a3d3f224b129b25670061f74ef774e528e9acdd22f0b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b03c207d2d92735e25f339ec3b14eb2560aabefb0ca6caf2106e434be7629c5\"" Jan 13 21:23:01.418178 containerd[1470]: time="2025-01-13T21:23:01.418126116Z" level=info msg="StartContainer for \"7b03c207d2d92735e25f339ec3b14eb2560aabefb0ca6caf2106e434be7629c5\"" Jan 13 21:23:01.497500 systemd[1]: Started cri-containerd-7b03c207d2d92735e25f339ec3b14eb2560aabefb0ca6caf2106e434be7629c5.scope - libcontainer container 7b03c207d2d92735e25f339ec3b14eb2560aabefb0ca6caf2106e434be7629c5. Jan 13 21:23:01.904084 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 13 21:23:01.904315 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:23:02.068740 containerd[1470]: time="2025-01-13T21:23:02.068695516Z" level=info msg="StartContainer for \"7b03c207d2d92735e25f339ec3b14eb2560aabefb0ca6caf2106e434be7629c5\" returns successfully" Jan 13 21:23:02.342532 kubelet[2494]: E0113 21:23:02.342496 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:02.389078 kubelet[2494]: I0113 21:23:02.389000 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6j7nt" podStartSLOduration=2.370055443 podStartE2EDuration="24.388982091s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="2025-01-13 21:22:39.099379298 +0000 UTC m=+11.394918719" lastFinishedPulling="2025-01-13 21:23:01.118305956 +0000 UTC m=+33.413845367" observedRunningTime="2025-01-13 21:23:02.387821092 +0000 UTC m=+34.683360523" watchObservedRunningTime="2025-01-13 21:23:02.388982091 +0000 UTC m=+34.684521512" Jan 13 21:23:03.344106 kubelet[2494]: E0113 21:23:03.344077 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:03.766319 kernel: bpftool[3950]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:23:03.808734 containerd[1470]: time="2025-01-13T21:23:03.808380649Z" level=info msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.875 [INFO][3975] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.876 [INFO][3975] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" iface="eth0" netns="/var/run/netns/cni-8a229497-8b91-2423-b40b-87f5ff37f072" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.876 [INFO][3975] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" iface="eth0" netns="/var/run/netns/cni-8a229497-8b91-2423-b40b-87f5ff37f072" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.876 [INFO][3975] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" iface="eth0" netns="/var/run/netns/cni-8a229497-8b91-2423-b40b-87f5ff37f072" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.876 [INFO][3975] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.876 [INFO][3975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.935 [INFO][3982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.935 [INFO][3982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.936 [INFO][3982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.942 [WARNING][3982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.942 [INFO][3982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.944 [INFO][3982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:03.950917 containerd[1470]: 2025-01-13 21:23:03.947 [INFO][3975] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:03.951477 containerd[1470]: time="2025-01-13T21:23:03.951108653Z" level=info msg="TearDown network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" successfully" Jan 13 21:23:03.951477 containerd[1470]: time="2025-01-13T21:23:03.951136005Z" level=info msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" returns successfully" Jan 13 21:23:03.951568 kubelet[2494]: E0113 21:23:03.951523 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:03.952992 containerd[1470]: time="2025-01-13T21:23:03.952667172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hch8x,Uid:dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a,Namespace:kube-system,Attempt:1,}" Jan 13 21:23:03.954398 systemd[1]: run-netns-cni\x2d8a229497\x2d8b91\x2d2423\x2db40b\x2d87f5ff37f072.mount: Deactivated successfully. 
Jan 13 21:23:04.050417 systemd-networkd[1404]: vxlan.calico: Link UP Jan 13 21:23:04.050424 systemd-networkd[1404]: vxlan.calico: Gained carrier Jan 13 21:23:04.120482 systemd-networkd[1404]: cali79b4d392503: Link UP Jan 13 21:23:04.120930 systemd-networkd[1404]: cali79b4d392503: Gained carrier Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.040 [INFO][3992] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hch8x-eth0 coredns-6f6b679f8f- kube-system dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a 852 0 2025-01-13 21:22:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hch8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79b4d392503 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.040 [INFO][3992] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.077 [INFO][4017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" HandleID="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.085 [INFO][4017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" HandleID="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hch8x", "timestamp":"2025-01-13 21:23:04.076977701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.085 [INFO][4017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.085 [INFO][4017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.085 [INFO][4017] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.087 [INFO][4017] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.095 [INFO][4017] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.099 [INFO][4017] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.101 [INFO][4017] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.103 [INFO][4017] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.103 [INFO][4017] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.105 [INFO][4017] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875 Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.108 [INFO][4017] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.113 [INFO][4017] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.114 [INFO][4017] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" host="localhost" Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.114 [INFO][4017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:23:04.138520 containerd[1470]: 2025-01-13 21:23:04.114 [INFO][4017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" HandleID="k8s-pod-network.5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.118 [INFO][3992] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hch8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hch8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79b4d392503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.118 [INFO][3992] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.118 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79b4d392503 ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.120 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.121 
[INFO][3992] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hch8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875", Pod:"coredns-6f6b679f8f-hch8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79b4d392503", MAC:"f6:44:f2:0e:5e:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:04.139244 containerd[1470]: 2025-01-13 21:23:04.133 [INFO][3992] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875" Namespace="kube-system" Pod="coredns-6f6b679f8f-hch8x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:04.177332 containerd[1470]: time="2025-01-13T21:23:04.176561392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:04.177483 containerd[1470]: time="2025-01-13T21:23:04.177396436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:04.177483 containerd[1470]: time="2025-01-13T21:23:04.177413569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:04.177632 containerd[1470]: time="2025-01-13T21:23:04.177495866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:04.202573 systemd[1]: Started cri-containerd-5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875.scope - libcontainer container 5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875. 
Jan 13 21:23:04.218515 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:04.252361 containerd[1470]: time="2025-01-13T21:23:04.252268300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hch8x,Uid:dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875\"" Jan 13 21:23:04.253757 kubelet[2494]: E0113 21:23:04.253372 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:04.256634 containerd[1470]: time="2025-01-13T21:23:04.256479644Z" level=info msg="CreateContainer within sandbox \"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:04.275341 containerd[1470]: time="2025-01-13T21:23:04.275266223Z" level=info msg="CreateContainer within sandbox \"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7006344df32b311d1bd5f7fbd78fb10cab1ae93b93771c4e5b91fd51660e287\"" Jan 13 21:23:04.276514 containerd[1470]: time="2025-01-13T21:23:04.276472958Z" level=info msg="StartContainer for \"d7006344df32b311d1bd5f7fbd78fb10cab1ae93b93771c4e5b91fd51660e287\"" Jan 13 21:23:04.310528 systemd[1]: Started cri-containerd-d7006344df32b311d1bd5f7fbd78fb10cab1ae93b93771c4e5b91fd51660e287.scope - libcontainer container d7006344df32b311d1bd5f7fbd78fb10cab1ae93b93771c4e5b91fd51660e287. Jan 13 21:23:04.345508 containerd[1470]: time="2025-01-13T21:23:04.345414195Z" level=info msg="StartContainer for \"d7006344df32b311d1bd5f7fbd78fb10cab1ae93b93771c4e5b91fd51660e287\" returns successfully" Jan 13 21:23:04.356475 kubelet[2494]: E0113 21:23:04.356249 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:04.380072 kubelet[2494]: I0113 21:23:04.379805 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hch8x" podStartSLOduration=31.379782483 podStartE2EDuration="31.379782483s" podCreationTimestamp="2025-01-13 21:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:04.377840555 +0000 UTC m=+36.673379976" watchObservedRunningTime="2025-01-13 21:23:04.379782483 +0000 UTC m=+36.675321905" Jan 13 21:23:04.807946 containerd[1470]: time="2025-01-13T21:23:04.807809130Z" level=info msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.847 [INFO][4199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.847 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" iface="eth0" netns="/var/run/netns/cni-f246a960-a95d-fd0c-b4ea-34216688b827" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.848 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" iface="eth0" netns="/var/run/netns/cni-f246a960-a95d-fd0c-b4ea-34216688b827" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.848 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" iface="eth0" netns="/var/run/netns/cni-f246a960-a95d-fd0c-b4ea-34216688b827" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.848 [INFO][4199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.848 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.866 [INFO][4206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.866 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.867 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.872 [WARNING][4206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.872 [INFO][4206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.873 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:04.878784 containerd[1470]: 2025-01-13 21:23:04.876 [INFO][4199] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:04.879749 containerd[1470]: time="2025-01-13T21:23:04.878958916Z" level=info msg="TearDown network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" successfully" Jan 13 21:23:04.879749 containerd[1470]: time="2025-01-13T21:23:04.878985317Z" level=info msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" returns successfully" Jan 13 21:23:04.879832 kubelet[2494]: E0113 21:23:04.879316 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:04.880528 containerd[1470]: time="2025-01-13T21:23:04.880454331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdsfv,Uid:84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c,Namespace:kube-system,Attempt:1,}" Jan 13 21:23:04.881427 systemd[1]: run-netns-cni\x2df246a960\x2da95d\x2dfd0c\x2db4ea\x2d34216688b827.mount: Deactivated successfully. Jan 13 21:23:04.983745 systemd-networkd[1404]: cali6b5cad46b8a: Link UP Jan 13 21:23:04.984002 systemd-networkd[1404]: cali6b5cad46b8a: Gained carrier Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.923 [INFO][4215] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0 coredns-6f6b679f8f- kube-system 84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c 871 0 2025-01-13 21:22:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-jdsfv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b5cad46b8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.923 [INFO][4215] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.949 [INFO][4227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" HandleID="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.956 [INFO][4227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" HandleID="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfcf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-jdsfv", "timestamp":"2025-01-13 21:23:04.949361505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.956 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.956 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.956 [INFO][4227] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.958 [INFO][4227] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.960 [INFO][4227] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.964 [INFO][4227] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.965 [INFO][4227] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.967 [INFO][4227] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.967 [INFO][4227] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.968 [INFO][4227] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894 Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.973 [INFO][4227] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.977 [INFO][4227] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.978 [INFO][4227] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" host="localhost" Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.978 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:23:04.998738 containerd[1470]: 2025-01-13 21:23:04.978 [INFO][4227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" HandleID="k8s-pod-network.5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.981 [INFO][4215] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-jdsfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5cad46b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.981 [INFO][4215] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.981 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b5cad46b8a ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.984 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.984 
[INFO][4215] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894", Pod:"coredns-6f6b679f8f-jdsfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5cad46b8a", MAC:"22:af:49:a4:96:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:04.999342 containerd[1470]: 2025-01-13 21:23:04.993 [INFO][4215] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894" Namespace="kube-system" Pod="coredns-6f6b679f8f-jdsfv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:05.020916 containerd[1470]: time="2025-01-13T21:23:05.020550796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:05.020916 containerd[1470]: time="2025-01-13T21:23:05.020740178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:05.020916 containerd[1470]: time="2025-01-13T21:23:05.020771558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:05.021090 containerd[1470]: time="2025-01-13T21:23:05.020867191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:05.040430 systemd[1]: Started cri-containerd-5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894.scope - libcontainer container 5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894. 
Jan 13 21:23:05.055128 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:05.082539 containerd[1470]: time="2025-01-13T21:23:05.082490335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdsfv,Uid:84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894\"" Jan 13 21:23:05.083367 kubelet[2494]: E0113 21:23:05.083328 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:05.085965 containerd[1470]: time="2025-01-13T21:23:05.085933829Z" level=info msg="CreateContainer within sandbox \"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:05.099315 containerd[1470]: time="2025-01-13T21:23:05.099255499Z" level=info msg="CreateContainer within sandbox \"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61e936e74f4057ccb641ab252dfc6f0466b37b281e1afcc728783ebca36e2b0d\"" Jan 13 21:23:05.100195 containerd[1470]: time="2025-01-13T21:23:05.100157881Z" level=info msg="StartContainer for \"61e936e74f4057ccb641ab252dfc6f0466b37b281e1afcc728783ebca36e2b0d\"" Jan 13 21:23:05.127467 systemd[1]: Started cri-containerd-61e936e74f4057ccb641ab252dfc6f0466b37b281e1afcc728783ebca36e2b0d.scope - libcontainer container 61e936e74f4057ccb641ab252dfc6f0466b37b281e1afcc728783ebca36e2b0d. Jan 13 21:23:05.153210 containerd[1470]: time="2025-01-13T21:23:05.153156282Z" level=info msg="StartContainer for \"61e936e74f4057ccb641ab252dfc6f0466b37b281e1afcc728783ebca36e2b0d\" returns successfully" Jan 13 21:23:05.360360 kubelet[2494]: E0113 21:23:05.360209 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:05.361020 kubelet[2494]: E0113 21:23:05.360657 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:05.383270 kubelet[2494]: I0113 21:23:05.383190 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jdsfv" podStartSLOduration=32.38316853 podStartE2EDuration="32.38316853s" podCreationTimestamp="2025-01-13 21:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:05.37159665 +0000 UTC m=+37.667136081" watchObservedRunningTime="2025-01-13 21:23:05.38316853 +0000 UTC m=+37.678707951" Jan 13 21:23:05.465085 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:44004.service - OpenSSH per-connection server daemon (10.0.0.1:44004). Jan 13 21:23:05.512320 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 44004 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:05.514007 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:05.518698 systemd-logind[1451]: New session 9 of user core. Jan 13 21:23:05.534402 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 21:23:05.635471 systemd-networkd[1404]: cali79b4d392503: Gained IPv6LL Jan 13 21:23:05.664578 sshd[4335]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:05.669122 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:44004.service: Deactivated successfully. Jan 13 21:23:05.671602 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:23:05.672369 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:23:05.673214 systemd-logind[1451]: Removed session 9. Jan 13 21:23:05.699796 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Jan 13 21:23:05.808633 containerd[1470]: time="2025-01-13T21:23:05.808570936Z" level=info msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" Jan 13 21:23:05.808839 containerd[1470]: time="2025-01-13T21:23:05.808702477Z" level=info msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.856 [INFO][4383] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.856 [INFO][4383] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" iface="eth0" netns="/var/run/netns/cni-abd8b2a9-307a-b101-f6d8-165d1d84eb70" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.857 [INFO][4383] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" iface="eth0" netns="/var/run/netns/cni-abd8b2a9-307a-b101-f6d8-165d1d84eb70" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.857 [INFO][4383] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" iface="eth0" netns="/var/run/netns/cni-abd8b2a9-307a-b101-f6d8-165d1d84eb70" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.857 [INFO][4383] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.857 [INFO][4383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.882 [INFO][4398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.882 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.883 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.888 [WARNING][4398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.888 [INFO][4398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.889 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:05.893985 containerd[1470]: 2025-01-13 21:23:05.891 [INFO][4383] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:05.894829 containerd[1470]: time="2025-01-13T21:23:05.894422050Z" level=info msg="TearDown network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" successfully" Jan 13 21:23:05.894829 containerd[1470]: time="2025-01-13T21:23:05.894456396Z" level=info msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" returns successfully" Jan 13 21:23:05.895907 containerd[1470]: time="2025-01-13T21:23:05.895845096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-dbz9n,Uid:5431517f-80a8-45c9-b517-ab6eb8f8217a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:23:05.898398 systemd[1]: run-netns-cni\x2dabd8b2a9\x2d307a\x2db101\x2df6d8\x2d165d1d84eb70.mount: Deactivated successfully. Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.861 [INFO][4384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.861 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" iface="eth0" netns="/var/run/netns/cni-6ca54304-a1e9-e2de-6b9c-f0fa9dc6e532" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.861 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" iface="eth0" netns="/var/run/netns/cni-6ca54304-a1e9-e2de-6b9c-f0fa9dc6e532" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.862 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" iface="eth0" netns="/var/run/netns/cni-6ca54304-a1e9-e2de-6b9c-f0fa9dc6e532" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.862 [INFO][4384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.862 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.885 [INFO][4403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.885 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.889 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.895 [WARNING][4403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.895 [INFO][4403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.896 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:05.902492 containerd[1470]: 2025-01-13 21:23:05.899 [INFO][4384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:05.905243 containerd[1470]: time="2025-01-13T21:23:05.905208841Z" level=info msg="TearDown network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" successfully" Jan 13 21:23:05.905243 containerd[1470]: time="2025-01-13T21:23:05.905242094Z" level=info msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" returns successfully" Jan 13 21:23:05.905903 systemd[1]: run-netns-cni\x2d6ca54304\x2da1e9\x2de2de\x2d6b9c\x2df0fa9dc6e532.mount: Deactivated successfully. 
Jan 13 21:23:05.906081 containerd[1470]: time="2025-01-13T21:23:05.905993149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r48mv,Uid:bea69c22-42f9-473c-8e07-d63b3f3fd2a2,Namespace:calico-system,Attempt:1,}" Jan 13 21:23:06.215416 systemd-networkd[1404]: cali15d2bf27578: Link UP Jan 13 21:23:06.215850 systemd-networkd[1404]: cali15d2bf27578: Gained carrier Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.029 [INFO][4413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0 calico-apiserver-7bff5578d8- calico-apiserver 5431517f-80a8-45c9-b517-ab6eb8f8217a 900 0 2025-01-13 21:22:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bff5578d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bff5578d8-dbz9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15d2bf27578 [] []}} ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.029 [INFO][4413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.055 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" HandleID="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.063 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" HandleID="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bff5578d8-dbz9n", "timestamp":"2025-01-13 21:23:06.055620568 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.063 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.063 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
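Every ipam_plugin.go invocation above brackets its work with "About to acquire" / "Acquired" / "Released host-wide IPAM lock", i.e. address assignment and release on this node are serialized. A minimal in-process sketch of that pattern with a sync.Mutex; the real lock coordinates across separate plugin processes, so this is only an approximation, and the request IDs are borrowed from the log purely as labels.

package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock stands in for Calico's host-wide IPAM lock.
var hostWideIPAMLock sync.Mutex

func withIPAMLock(requestID int, fn func()) {
	fmt.Printf("[%d] About to acquire host-wide IPAM lock.\n", requestID)
	hostWideIPAMLock.Lock()
	fmt.Printf("[%d] Acquired host-wide IPAM lock.\n", requestID)
	fn()
	hostWideIPAMLock.Unlock()
	fmt.Printf("[%d] Released host-wide IPAM lock.\n", requestID)
}

func main() {
	var wg sync.WaitGroup
	for _, id := range []int{4398, 4403, 4428} { // IDs borrowed from the log lines above
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			withIPAMLock(id, func() {
				// assign or release addresses here; only one request runs at a time
			})
		}(id)
	}
	wg.Wait()
}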
Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.063 [INFO][4428] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.065 [INFO][4428] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.069 [INFO][4428] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.073 [INFO][4428] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.074 [INFO][4428] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.076 [INFO][4428] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.076 [INFO][4428] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.078 [INFO][4428] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234 Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.092 [INFO][4428] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.209 [INFO][4428] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.209 [INFO][4428] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" host="localhost" Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.209 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
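A quick sanity check of the numbers in the allocation above: the host holds an affinity for the block 192.168.88.128/26, and the new endpoint is assigned 192.168.88.131 out of it. A /26 spans 64 addresses (.128 through .191), which the standard library confirms:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the block this host has an affinity for
	assigned := netip.MustParseAddr("192.168.88.131")   // the address claimed above

	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true
	fmt.Println("addresses in the /26 block:", 1<<(32-block.Bits()))     // 64
}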
Jan 13 21:23:06.275682 containerd[1470]: 2025-01-13 21:23:06.209 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" HandleID="k8s-pod-network.ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.212 [INFO][4413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5431517f-80a8-45c9-b517-ab6eb8f8217a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bff5578d8-dbz9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15d2bf27578", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.212 [INFO][4413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.212 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15d2bf27578 ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.217 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.217 [INFO][4413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5431517f-80a8-45c9-b517-ab6eb8f8217a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234", Pod:"calico-apiserver-7bff5578d8-dbz9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15d2bf27578", MAC:"5a:4f:d7:cc:db:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:06.276542 containerd[1470]: 2025-01-13 21:23:06.272 [INFO][4413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-dbz9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:06.335716 containerd[1470]: time="2025-01-13T21:23:06.335468312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:06.335716 containerd[1470]: time="2025-01-13T21:23:06.335528056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:06.335716 containerd[1470]: time="2025-01-13T21:23:06.335541691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:06.335716 containerd[1470]: time="2025-01-13T21:23:06.335639768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:06.348237 systemd-networkd[1404]: cali1e0b5de0705: Link UP Jan 13 21:23:06.349242 systemd-networkd[1404]: cali1e0b5de0705: Gained carrier Jan 13 21:23:06.364043 kubelet[2494]: E0113 21:23:06.364000 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:06.365804 kubelet[2494]: E0113 21:23:06.365783 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.220 [INFO][4437] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--r48mv-eth0 csi-node-driver- calico-system bea69c22-42f9-473c-8e07-d63b3f3fd2a2 901 0 2025-01-13 21:22:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-r48mv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e0b5de0705 [] []}} ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.220 [INFO][4437] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.295 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" HandleID="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.302 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" HandleID="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-r48mv", "timestamp":"2025-01-13 21:23:06.29553112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.302 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.302 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
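The kubelet "Nameserver limits exceeded" events repeated above mean the node's resolv.conf lists more nameservers than kubelet will pass through to pods (the classic resolver limit of three entries), so it keeps the first three and reports the applied line (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that clamping; the fourth nameserver in the example input is invented here only to exercise the omission path and does not appear in the log.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers is the classic libc resolver limit that kubelet enforces.
const maxNameservers = 3

// clampNameservers keeps the first maxNameservers entries and reports the rest.
func clampNameservers(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// The first three entries match the "applied nameserver line" in the log;
	// the fourth (9.9.9.9) is a hypothetical extra that triggers the warning.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := clampNameservers(conf)
	fmt.Println("applied:", strings.Join(applied, " "))
	fmt.Println("omitted:", strings.Join(omitted, " "))
}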
Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.302 [INFO][4454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.304 [INFO][4454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.307 [INFO][4454] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.322 [INFO][4454] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.324 [INFO][4454] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.327 [INFO][4454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.327 [INFO][4454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.328 [INFO][4454] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636 Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.337 [INFO][4454] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.342 [INFO][4454] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.342 [INFO][4454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" host="localhost" Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.342 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
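The sandboxes in this section receive 192.168.88.131, .132, .133 and .134 in order (the .133 and .134 assignments appear further down), all from the one block the host has an affinity for. A toy allocator that hands out addresses sequentially reproduces that sequence; it is seeded at .131 to match the log and is far simpler than Calico's real allocator, which also tracks handles and reservations.

package main

import (
	"fmt"
	"net/netip"
)

// blockAllocator hands out addresses from a single affine block in order.
type blockAllocator struct {
	prefix netip.Prefix
	next   netip.Addr
}

func (b *blockAllocator) assign() (netip.Addr, bool) {
	if !b.prefix.Contains(b.next) {
		return netip.Addr{}, false // block exhausted
	}
	ip := b.next
	b.next = b.next.Next()
	return ip, true
}

func main() {
	alloc := &blockAllocator{
		prefix: netip.MustParsePrefix("192.168.88.128/26"),
		next:   netip.MustParseAddr("192.168.88.131"), // seeded to match the log
	}
	for _, pod := range []string{
		"calico-apiserver-7bff5578d8-dbz9n",
		"csi-node-driver-r48mv",
		"calico-kube-controllers-69bcc55845-fstwj",
		"calico-apiserver-7bff5578d8-wgmqb",
	} {
		if ip, ok := alloc.assign(); ok {
			fmt.Printf("%-42s -> %s\n", pod, ip) // .131, .132, .133, .134 as in the log
		}
	}
}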
Jan 13 21:23:06.366913 containerd[1470]: 2025-01-13 21:23:06.343 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" HandleID="k8s-pod-network.3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.345 [INFO][4437] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r48mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea69c22-42f9-473c-8e07-d63b3f3fd2a2", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-r48mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e0b5de0705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.346 [INFO][4437] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.346 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e0b5de0705 ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.348 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.348 [INFO][4437] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r48mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea69c22-42f9-473c-8e07-d63b3f3fd2a2", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636", Pod:"csi-node-driver-r48mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e0b5de0705", MAC:"3e:b0:63:c6:16:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:06.367422 containerd[1470]: 2025-01-13 21:23:06.359 [INFO][4437] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636" Namespace="calico-system" Pod="csi-node-driver-r48mv" WorkloadEndpoint="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:06.372859 systemd[1]: Started cri-containerd-ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234.scope - libcontainer container ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234. Jan 13 21:23:06.388095 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:06.400020 containerd[1470]: time="2025-01-13T21:23:06.399663238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:06.400020 containerd[1470]: time="2025-01-13T21:23:06.399747289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:06.400020 containerd[1470]: time="2025-01-13T21:23:06.399766795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:06.400020 containerd[1470]: time="2025-01-13T21:23:06.399895311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:06.422131 containerd[1470]: time="2025-01-13T21:23:06.419803006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-dbz9n,Uid:5431517f-80a8-45c9-b517-ab6eb8f8217a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234\"" Jan 13 21:23:06.422131 containerd[1470]: time="2025-01-13T21:23:06.421321182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:23:06.434423 systemd[1]: Started cri-containerd-3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636.scope - libcontainer container 3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636. Jan 13 21:23:06.448573 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:06.460272 containerd[1470]: time="2025-01-13T21:23:06.460214893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r48mv,Uid:bea69c22-42f9-473c-8e07-d63b3f3fd2a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636\"" Jan 13 21:23:06.808472 containerd[1470]: time="2025-01-13T21:23:06.808424353Z" level=info msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" Jan 13 21:23:06.979454 systemd-networkd[1404]: cali6b5cad46b8a: Gained IPv6LL Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.963 [INFO][4587] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.963 [INFO][4587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" iface="eth0" netns="/var/run/netns/cni-5020b0c1-b45f-5c9a-bba6-2c62997efb3a" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.964 [INFO][4587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" iface="eth0" netns="/var/run/netns/cni-5020b0c1-b45f-5c9a-bba6-2c62997efb3a" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.964 [INFO][4587] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" iface="eth0" netns="/var/run/netns/cni-5020b0c1-b45f-5c9a-bba6-2c62997efb3a" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.964 [INFO][4587] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.964 [INFO][4587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.983 [INFO][4594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.983 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:06.983 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:07.045 [WARNING][4594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:07.045 [INFO][4594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:07.046 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:07.051359 containerd[1470]: 2025-01-13 21:23:07.049 [INFO][4587] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:07.052089 containerd[1470]: time="2025-01-13T21:23:07.051520521Z" level=info msg="TearDown network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" successfully" Jan 13 21:23:07.052089 containerd[1470]: time="2025-01-13T21:23:07.051546551Z" level=info msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" returns successfully" Jan 13 21:23:07.052309 containerd[1470]: time="2025-01-13T21:23:07.052266915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bcc55845-fstwj,Uid:720f2bda-e32c-4788-8276-d58130a626c1,Namespace:calico-system,Attempt:1,}" Jan 13 21:23:07.054124 systemd[1]: run-netns-cni\x2d5020b0c1\x2db45f\x2d5c9a\x2dbba6\x2d2c62997efb3a.mount: Deactivated successfully. 
Jan 13 21:23:07.367364 kubelet[2494]: E0113 21:23:07.367329 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:07.367876 kubelet[2494]: E0113 21:23:07.367340 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:07.496488 systemd-networkd[1404]: cali86b8ec95d9a: Link UP Jan 13 21:23:07.496735 systemd-networkd[1404]: cali86b8ec95d9a: Gained carrier Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.437 [INFO][4603] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0 calico-kube-controllers-69bcc55845- calico-system 720f2bda-e32c-4788-8276-d58130a626c1 917 0 2025-01-13 21:22:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69bcc55845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69bcc55845-fstwj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86b8ec95d9a [] []}} ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.438 [INFO][4603] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.463 [INFO][4616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" HandleID="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.470 [INFO][4616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" HandleID="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69bcc55845-fstwj", "timestamp":"2025-01-13 21:23:07.463403349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.470 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.470 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
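The kubelet entries use the standard klog header: a severity letter, MMDD date, wall-clock time, PID, then source file:line before the message. A small parser for that header, fed one of the dns.go:153 lines from above; the regular expression is an illustrative approximation, not kubelet code.

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches "<severity><MMDD> <time> <pid> <file:line>]".
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\]`)

func main() {
	line := `E0113 21:23:07.367329 2494 dns.go:153] "Nameserver limits exceeded"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date(MMDD)=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
}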
Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.470 [INFO][4616] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.472 [INFO][4616] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.475 [INFO][4616] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.479 [INFO][4616] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.480 [INFO][4616] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.482 [INFO][4616] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.482 [INFO][4616] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.483 [INFO][4616] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6 Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.486 [INFO][4616] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.490 [INFO][4616] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.490 [INFO][4616] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" host="localhost" Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.491 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
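The ipam.go sequence above (1685 "Creating new handle", 1203 "Writing block in order to claim IPs", 1216 "Successfully claimed IPs") reads like a two-step claim: record a handle, then write the block back with the new allocation, retrying if another writer got in first. A toy compare-and-swap version of that idea with an in-memory block standing in for the real datastore; this structure is an assumption drawn from the log order, not Calico source.

package main

import "fmt"

// block is an in-memory stand-in for an IPAM block resource in the datastore.
type block struct {
	revision int
	claimed  map[string]string // IP -> handle
}

// writeBlock only succeeds if the caller saw the latest revision; otherwise
// the caller re-reads the block and retries, as an optimistic write would.
func writeBlock(stored *block, sawRevision int, ip, handle string) bool {
	if sawRevision != stored.revision {
		return false
	}
	stored.claimed[ip] = handle
	stored.revision++
	return true
}

func main() {
	b := &block{claimed: map[string]string{}}
	handle := "k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6"

	for attempt := 1; ; attempt++ {
		saw := b.revision // "Attempting to load block"
		if writeBlock(b, saw, "192.168.88.133", handle) { // "Writing block in order to claim IPs"
			fmt.Printf("successfully claimed 192.168.88.133 on attempt %d\n", attempt)
			break
		}
	}
}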
Jan 13 21:23:07.509424 containerd[1470]: 2025-01-13 21:23:07.491 [INFO][4616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" HandleID="k8s-pod-network.93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.493 [INFO][4603] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0", GenerateName:"calico-kube-controllers-69bcc55845-", Namespace:"calico-system", SelfLink:"", UID:"720f2bda-e32c-4788-8276-d58130a626c1", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bcc55845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69bcc55845-fstwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86b8ec95d9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.493 [INFO][4603] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.493 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86b8ec95d9a ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.495 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.496 [INFO][4603] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0", GenerateName:"calico-kube-controllers-69bcc55845-", Namespace:"calico-system", SelfLink:"", UID:"720f2bda-e32c-4788-8276-d58130a626c1", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bcc55845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6", Pod:"calico-kube-controllers-69bcc55845-fstwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86b8ec95d9a", MAC:"7e:e6:0a:67:ff:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:07.510032 containerd[1470]: 2025-01-13 21:23:07.505 [INFO][4603] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6" Namespace="calico-system" Pod="calico-kube-controllers-69bcc55845-fstwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:07.531546 containerd[1470]: time="2025-01-13T21:23:07.531420230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:07.531546 containerd[1470]: time="2025-01-13T21:23:07.531523727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:07.531711 containerd[1470]: time="2025-01-13T21:23:07.531554966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.531756 containerd[1470]: time="2025-01-13T21:23:07.531712908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.554424 systemd[1]: Started cri-containerd-93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6.scope - libcontainer container 93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6. 
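The &v3.WorkloadEndpoint{...} dumps in these lines are hard to read inline. Collecting just the fields that vary per pod into a small struct and printing it as JSON gives a more legible summary; the values below are copied from the calico-kube-controllers endpoint above, but the struct itself is a simplified stand-in, not the real projectcalico.org/v3 type.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// endpointSummary keeps only the per-pod fields that matter in these dumps.
type endpointSummary struct {
	Name          string   `json:"name"`
	Namespace     string   `json:"namespace"`
	Pod           string   `json:"pod"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac,omitempty"`
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
}

func main() {
	ep := endpointSummary{
		Name:          "localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0",
		Namespace:     "calico-system",
		Pod:           "calico-kube-controllers-69bcc55845-fstwj",
		InterfaceName: "cali86b8ec95d9a",
		MAC:           "7e:e6:0a:67:ff:34",
		IPNetworks:    []string{"192.168.88.133/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(ep); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}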
Jan 13 21:23:07.555409 systemd-networkd[1404]: cali15d2bf27578: Gained IPv6LL Jan 13 21:23:07.566892 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:07.589485 containerd[1470]: time="2025-01-13T21:23:07.589427661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bcc55845-fstwj,Uid:720f2bda-e32c-4788-8276-d58130a626c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6\"" Jan 13 21:23:08.387449 systemd-networkd[1404]: cali1e0b5de0705: Gained IPv6LL Jan 13 21:23:08.808458 containerd[1470]: time="2025-01-13T21:23:08.808327341Z" level=info msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.902 [INFO][4698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.903 [INFO][4698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" iface="eth0" netns="/var/run/netns/cni-69a2309d-5eee-cf5a-07eb-919080168799" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.903 [INFO][4698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" iface="eth0" netns="/var/run/netns/cni-69a2309d-5eee-cf5a-07eb-919080168799" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.903 [INFO][4698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" iface="eth0" netns="/var/run/netns/cni-69a2309d-5eee-cf5a-07eb-919080168799" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.903 [INFO][4698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.903 [INFO][4698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.927 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.927 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.927 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.934 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.934 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.936 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:08.949726 containerd[1470]: 2025-01-13 21:23:08.940 [INFO][4698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:08.949658 systemd[1]: run-netns-cni\x2d69a2309d\x2d5eee\x2dcf5a\x2d07eb\x2d919080168799.mount: Deactivated successfully. Jan 13 21:23:08.951991 containerd[1470]: time="2025-01-13T21:23:08.950631283Z" level=info msg="TearDown network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" successfully" Jan 13 21:23:08.951991 containerd[1470]: time="2025-01-13T21:23:08.950663053Z" level=info msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" returns successfully" Jan 13 21:23:08.951991 containerd[1470]: time="2025-01-13T21:23:08.951267165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-wgmqb,Uid:8db2a05b-6de2-4a2c-8a45-6f493623a948,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:23:08.964466 systemd-networkd[1404]: cali86b8ec95d9a: Gained IPv6LL Jan 13 21:23:09.074040 containerd[1470]: time="2025-01-13T21:23:09.073856805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:09.075438 containerd[1470]: time="2025-01-13T21:23:09.075338188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:23:09.076479 containerd[1470]: time="2025-01-13T21:23:09.076433215Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:09.083172 containerd[1470]: time="2025-01-13T21:23:09.082568125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.661223337s" Jan 13 21:23:09.083172 containerd[1470]: time="2025-01-13T21:23:09.082603342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:23:09.086728 containerd[1470]: time="2025-01-13T21:23:09.086702463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:23:09.088703 containerd[1470]: time="2025-01-13T21:23:09.088648852Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:09.089104 containerd[1470]: time="2025-01-13T21:23:09.089074253Z" level=info msg="CreateContainer within sandbox \"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:23:09.104347 containerd[1470]: time="2025-01-13T21:23:09.104144560Z" level=info msg="CreateContainer within sandbox \"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"345daa940912f9c1f132136b9920e7b0b14382755abdf5295b856c3259e0755c\"" Jan 13 21:23:09.105199 containerd[1470]: time="2025-01-13T21:23:09.105150486Z" level=info msg="StartContainer for \"345daa940912f9c1f132136b9920e7b0b14382755abdf5295b856c3259e0755c\"" Jan 13 21:23:09.139440 systemd[1]: Started cri-containerd-345daa940912f9c1f132136b9920e7b0b14382755abdf5295b856c3259e0755c.scope - libcontainer container 345daa940912f9c1f132136b9920e7b0b14382755abdf5295b856c3259e0755c. Jan 13 21:23:09.188092 systemd-networkd[1404]: calia3c2c59808d: Link UP Jan 13 21:23:09.188796 systemd-networkd[1404]: calia3c2c59808d: Gained carrier Jan 13 21:23:09.194536 containerd[1470]: time="2025-01-13T21:23:09.194482414Z" level=info msg="StartContainer for \"345daa940912f9c1f132136b9920e7b0b14382755abdf5295b856c3259e0755c\" returns successfully" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.117 [INFO][4718] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0 calico-apiserver-7bff5578d8- calico-apiserver 8db2a05b-6de2-4a2c-8a45-6f493623a948 931 0 2025-01-13 21:22:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bff5578d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bff5578d8-wgmqb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3c2c59808d [] []}} ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.117 [INFO][4718] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.147 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" HandleID="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.157 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" 
HandleID="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bff5578d8-wgmqb", "timestamp":"2025-01-13 21:23:09.147877876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.157 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.157 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.158 [INFO][4748] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.159 [INFO][4748] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.163 [INFO][4748] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.167 [INFO][4748] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.168 [INFO][4748] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.170 [INFO][4748] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.170 [INFO][4748] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.171 [INFO][4748] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046 Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.175 [INFO][4748] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.181 [INFO][4748] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.182 [INFO][4748] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" host="localhost" Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.182 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:23:09.204140 containerd[1470]: 2025-01-13 21:23:09.182 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" HandleID="k8s-pod-network.33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.185 [INFO][4718] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8db2a05b-6de2-4a2c-8a45-6f493623a948", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bff5578d8-wgmqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c2c59808d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.185 [INFO][4718] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.185 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3c2c59808d ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.188 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.189 [INFO][4718] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8db2a05b-6de2-4a2c-8a45-6f493623a948", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046", Pod:"calico-apiserver-7bff5578d8-wgmqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c2c59808d", MAC:"9a:cd:06:d6:c9:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:09.206119 containerd[1470]: 2025-01-13 21:23:09.198 [INFO][4718] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046" Namespace="calico-apiserver" Pod="calico-apiserver-7bff5578d8-wgmqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:09.229265 containerd[1470]: time="2025-01-13T21:23:09.229180933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:09.229265 containerd[1470]: time="2025-01-13T21:23:09.229224987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:09.229265 containerd[1470]: time="2025-01-13T21:23:09.229234896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:09.229429 containerd[1470]: time="2025-01-13T21:23:09.229323525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:09.253432 systemd[1]: Started cri-containerd-33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046.scope - libcontainer container 33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046. 
Jan 13 21:23:09.265624 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:23:09.295336 containerd[1470]: time="2025-01-13T21:23:09.295302280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bff5578d8-wgmqb,Uid:8db2a05b-6de2-4a2c-8a45-6f493623a948,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046\"" Jan 13 21:23:09.299047 containerd[1470]: time="2025-01-13T21:23:09.299023742Z" level=info msg="CreateContainer within sandbox \"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:23:09.312308 containerd[1470]: time="2025-01-13T21:23:09.312221481Z" level=info msg="CreateContainer within sandbox \"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"542474ee2eebcfdc3225fd4d6b4f7f0b267b028dade5a2b763a2eb9483d57546\"" Jan 13 21:23:09.312654 containerd[1470]: time="2025-01-13T21:23:09.312567821Z" level=info msg="StartContainer for \"542474ee2eebcfdc3225fd4d6b4f7f0b267b028dade5a2b763a2eb9483d57546\"" Jan 13 21:23:09.345430 systemd[1]: Started cri-containerd-542474ee2eebcfdc3225fd4d6b4f7f0b267b028dade5a2b763a2eb9483d57546.scope - libcontainer container 542474ee2eebcfdc3225fd4d6b4f7f0b267b028dade5a2b763a2eb9483d57546. Jan 13 21:23:09.404402 containerd[1470]: time="2025-01-13T21:23:09.404349896Z" level=info msg="StartContainer for \"542474ee2eebcfdc3225fd4d6b4f7f0b267b028dade5a2b763a2eb9483d57546\" returns successfully" Jan 13 21:23:09.874968 kubelet[2494]: I0113 21:23:09.873315 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bff5578d8-dbz9n" podStartSLOduration=29.207795427 podStartE2EDuration="31.873276469s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="2025-01-13 21:23:06.421043863 +0000 UTC m=+38.716583284" lastFinishedPulling="2025-01-13 21:23:09.086524905 +0000 UTC m=+41.382064326" observedRunningTime="2025-01-13 21:23:09.387850235 +0000 UTC m=+41.683389656" watchObservedRunningTime="2025-01-13 21:23:09.873276469 +0000 UTC m=+42.168815890" Jan 13 21:23:10.631816 kubelet[2494]: I0113 21:23:10.631752 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bff5578d8-wgmqb" podStartSLOduration=32.63173215 podStartE2EDuration="32.63173215s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:10.63112325 +0000 UTC m=+42.926662682" watchObservedRunningTime="2025-01-13 21:23:10.63173215 +0000 UTC m=+42.927271571" Jan 13 21:23:10.677478 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:44010.service - OpenSSH per-connection server daemon (10.0.0.1:44010). Jan 13 21:23:10.787986 sshd[4887]: Accepted publickey for core from 10.0.0.1 port 44010 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:10.789709 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:10.793805 systemd-logind[1451]: New session 10 of user core. Jan 13 21:23:10.801406 systemd[1]: Started session-10.scope - Session 10 of User core. 
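The kubelet pod_startup_latency_tracker entries above report two durations per pod. For calico-apiserver-7bff5578d8-dbz9n the numbers line up exactly as: podStartE2EDuration = watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration = that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short self-contained check against the timestamps copied from the entry above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time formatting, which is what the kubelet log prints.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps from the calico-apiserver-7bff5578d8-dbz9n entry.
	created := parse("2025-01-13 21:22:38 +0000 UTC")           // podCreationTimestamp
	firstPull := parse("2025-01-13 21:23:06.421043863 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-01-13 21:23:09.086524905 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-01-13 21:23:09.873276469 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pull               // podStartSLOduration (pull time excluded)

	fmt.Println("E2E:", e2e.Seconds())   // 31.873276469
	fmt.Println("pull:", pull.Seconds()) // 2.665481042
	fmt.Println("SLO:", slo.Seconds())   // 29.207795427, matching the log
}
```

The second entry above (calico-apiserver-7bff5578d8-wgmqb) has zero-value pull timestamps ("0001-01-01 00:00:00"), so its SLO and E2E durations are reported as equal.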
Jan 13 21:23:10.964472 sshd[4887]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:10.968506 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:44010.service: Deactivated successfully. Jan 13 21:23:10.970427 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:23:10.971020 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:23:10.972262 systemd-logind[1451]: Removed session 10. Jan 13 21:23:11.140593 systemd-networkd[1404]: calia3c2c59808d: Gained IPv6LL Jan 13 21:23:11.234962 containerd[1470]: time="2025-01-13T21:23:11.234837489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:11.235777 containerd[1470]: time="2025-01-13T21:23:11.235724678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:23:11.237054 containerd[1470]: time="2025-01-13T21:23:11.237030234Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:11.238975 containerd[1470]: time="2025-01-13T21:23:11.238948727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:11.239535 containerd[1470]: time="2025-01-13T21:23:11.239502592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.152594287s" Jan 13 21:23:11.239578 containerd[1470]: time="2025-01-13T21:23:11.239534733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:23:11.240435 containerd[1470]: time="2025-01-13T21:23:11.240404629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:23:11.241931 containerd[1470]: time="2025-01-13T21:23:11.241905888Z" level=info msg="CreateContainer within sandbox \"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:23:11.259222 containerd[1470]: time="2025-01-13T21:23:11.259106217Z" level=info msg="CreateContainer within sandbox \"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2318c881477e2f11ff36961978ff1aaaadd8026fae0c36a8111f1824f61011a1\"" Jan 13 21:23:11.259768 containerd[1470]: time="2025-01-13T21:23:11.259737219Z" level=info msg="StartContainer for \"2318c881477e2f11ff36961978ff1aaaadd8026fae0c36a8111f1824f61011a1\"" Jan 13 21:23:11.292510 systemd[1]: Started cri-containerd-2318c881477e2f11ff36961978ff1aaaadd8026fae0c36a8111f1824f61011a1.scope - libcontainer container 2318c881477e2f11ff36961978ff1aaaadd8026fae0c36a8111f1824f61011a1. 
Jan 13 21:23:11.323550 containerd[1470]: time="2025-01-13T21:23:11.323443657Z" level=info msg="StartContainer for \"2318c881477e2f11ff36961978ff1aaaadd8026fae0c36a8111f1824f61011a1\" returns successfully" Jan 13 21:23:14.969804 containerd[1470]: time="2025-01-13T21:23:14.969753231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:14.970535 containerd[1470]: time="2025-01-13T21:23:14.970480335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:23:14.971611 containerd[1470]: time="2025-01-13T21:23:14.971568354Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:14.973604 containerd[1470]: time="2025-01-13T21:23:14.973572175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:14.974233 containerd[1470]: time="2025-01-13T21:23:14.974196381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.733763539s" Jan 13 21:23:14.974233 containerd[1470]: time="2025-01-13T21:23:14.974225237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:23:14.975236 containerd[1470]: time="2025-01-13T21:23:14.975210260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:23:14.986060 containerd[1470]: time="2025-01-13T21:23:14.985426480Z" level=info msg="CreateContainer within sandbox \"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:23:15.095877 containerd[1470]: time="2025-01-13T21:23:15.095813811Z" level=info msg="CreateContainer within sandbox \"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ceba200f7af02cd959775c28036209de5123c5cf672e98061ae551761c361092\"" Jan 13 21:23:15.096516 containerd[1470]: time="2025-01-13T21:23:15.096481470Z" level=info msg="StartContainer for \"ceba200f7af02cd959775c28036209de5123c5cf672e98061ae551761c361092\"" Jan 13 21:23:15.126024 systemd[1]: Started cri-containerd-ceba200f7af02cd959775c28036209de5123c5cf672e98061ae551761c361092.scope - libcontainer container ceba200f7af02cd959775c28036209de5123c5cf672e98061ae551761c361092. 
Jan 13 21:23:15.309383 containerd[1470]: time="2025-01-13T21:23:15.307925030Z" level=info msg="StartContainer for \"ceba200f7af02cd959775c28036209de5123c5cf672e98061ae551761c361092\" returns successfully" Jan 13 21:23:15.410614 kubelet[2494]: I0113 21:23:15.410434 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69bcc55845-fstwj" podStartSLOduration=30.025945197 podStartE2EDuration="37.410413582s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="2025-01-13 21:23:07.590567424 +0000 UTC m=+39.886106855" lastFinishedPulling="2025-01-13 21:23:14.975035819 +0000 UTC m=+47.270575240" observedRunningTime="2025-01-13 21:23:15.409024872 +0000 UTC m=+47.704564293" watchObservedRunningTime="2025-01-13 21:23:15.410413582 +0000 UTC m=+47.705953003" Jan 13 21:23:15.975824 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:50626.service - OpenSSH per-connection server daemon (10.0.0.1:50626). Jan 13 21:23:16.016443 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 50626 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:16.018247 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:16.022427 systemd-logind[1451]: New session 11 of user core. Jan 13 21:23:16.037433 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:23:16.156886 sshd[5019]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:16.167066 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:50626.service: Deactivated successfully. Jan 13 21:23:16.169204 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:23:16.170771 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:23:16.189693 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:50628.service - OpenSSH per-connection server daemon (10.0.0.1:50628). Jan 13 21:23:16.191151 systemd-logind[1451]: Removed session 11. Jan 13 21:23:16.219023 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 50628 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:16.220449 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:16.224063 systemd-logind[1451]: New session 12 of user core. Jan 13 21:23:16.234427 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 21:23:16.354738 containerd[1470]: time="2025-01-13T21:23:16.354660510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:16.355476 containerd[1470]: time="2025-01-13T21:23:16.355398863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:23:16.358932 containerd[1470]: time="2025-01-13T21:23:16.358889888Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:16.361444 containerd[1470]: time="2025-01-13T21:23:16.361381363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:16.362201 containerd[1470]: time="2025-01-13T21:23:16.362061005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.3868217s" Jan 13 21:23:16.362201 containerd[1470]: time="2025-01-13T21:23:16.362095621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:23:16.372475 containerd[1470]: time="2025-01-13T21:23:16.372138247Z" level=info msg="CreateContainer within sandbox \"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:23:16.377276 sshd[5035]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:16.394227 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:50628.service: Deactivated successfully. Jan 13 21:23:16.396131 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:23:16.406082 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:23:16.410050 containerd[1470]: time="2025-01-13T21:23:16.410005006Z" level=info msg="CreateContainer within sandbox \"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"85988fe646447d18ea7d492e7966e537ad7d3bd060cf96b4c09a9abb2f23d532\"" Jan 13 21:23:16.412074 containerd[1470]: time="2025-01-13T21:23:16.411594326Z" level=info msg="StartContainer for \"85988fe646447d18ea7d492e7966e537ad7d3bd060cf96b4c09a9abb2f23d532\"" Jan 13 21:23:16.417775 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). Jan 13 21:23:16.418868 systemd-logind[1451]: Removed session 12. Jan 13 21:23:16.446419 systemd[1]: Started cri-containerd-85988fe646447d18ea7d492e7966e537ad7d3bd060cf96b4c09a9abb2f23d532.scope - libcontainer container 85988fe646447d18ea7d492e7966e537ad7d3bd060cf96b4c09a9abb2f23d532. 
Jan 13 21:23:16.450841 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:16.452737 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:16.456415 systemd-logind[1451]: New session 13 of user core. Jan 13 21:23:16.464401 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:23:16.481208 containerd[1470]: time="2025-01-13T21:23:16.481161273Z" level=info msg="StartContainer for \"85988fe646447d18ea7d492e7966e537ad7d3bd060cf96b4c09a9abb2f23d532\" returns successfully" Jan 13 21:23:16.589478 sshd[5051]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:16.595178 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:50638.service: Deactivated successfully. Jan 13 21:23:16.597208 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:23:16.597934 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:23:16.598911 systemd-logind[1451]: Removed session 13. Jan 13 21:23:17.344653 kubelet[2494]: I0113 21:23:17.344594 2494 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:23:17.344653 kubelet[2494]: I0113 21:23:17.344658 2494 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:23:17.509471 kubelet[2494]: I0113 21:23:17.509166 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r48mv" podStartSLOduration=29.607691662 podStartE2EDuration="39.509148283s" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="2025-01-13 21:23:06.461568877 +0000 UTC m=+38.757108298" lastFinishedPulling="2025-01-13 21:23:16.363025498 +0000 UTC m=+48.658564919" observedRunningTime="2025-01-13 21:23:17.509021473 +0000 UTC m=+49.804560894" watchObservedRunningTime="2025-01-13 21:23:17.509148283 +0000 UTC m=+49.804687704" Jan 13 21:23:21.599970 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:50654.service - OpenSSH per-connection server daemon (10.0.0.1:50654). Jan 13 21:23:21.631742 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 50654 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:21.633118 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:21.636977 systemd-logind[1451]: New session 14 of user core. Jan 13 21:23:21.646414 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:23:21.753422 sshd[5112]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:21.757508 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:50654.service: Deactivated successfully. Jan 13 21:23:21.759564 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:23:21.760202 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:23:21.761020 systemd-logind[1451]: Removed session 14. Jan 13 21:23:26.765622 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:53838.service - OpenSSH per-connection server daemon (10.0.0.1:53838). 
Jan 13 21:23:26.801381 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 53838 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:26.803247 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:26.807781 systemd-logind[1451]: New session 15 of user core. Jan 13 21:23:26.817616 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:23:26.982332 sshd[5176]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:26.987098 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:53838.service: Deactivated successfully. Jan 13 21:23:26.989070 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:23:26.989829 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:23:26.990668 systemd-logind[1451]: Removed session 15. Jan 13 21:23:27.803686 containerd[1470]: time="2025-01-13T21:23:27.803632871Z" level=info msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.839 [WARNING][5205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0", GenerateName:"calico-kube-controllers-69bcc55845-", Namespace:"calico-system", SelfLink:"", UID:"720f2bda-e32c-4788-8276-d58130a626c1", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bcc55845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6", Pod:"calico-kube-controllers-69bcc55845-fstwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86b8ec95d9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.839 [INFO][5205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.839 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" iface="eth0" netns="" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.839 [INFO][5205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.839 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.858 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.858 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.858 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.863 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.863 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.864 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:27.869637 containerd[1470]: 2025-01-13 21:23:27.866 [INFO][5205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.869637 containerd[1470]: time="2025-01-13T21:23:27.869628812Z" level=info msg="TearDown network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" successfully" Jan 13 21:23:27.870129 containerd[1470]: time="2025-01-13T21:23:27.869661754Z" level=info msg="StopPodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" returns successfully" Jan 13 21:23:27.876775 containerd[1470]: time="2025-01-13T21:23:27.876715208Z" level=info msg="RemovePodSandbox for \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" Jan 13 21:23:27.878950 containerd[1470]: time="2025-01-13T21:23:27.878918383Z" level=info msg="Forcibly stopping sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\"" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.913 [WARNING][5236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0", GenerateName:"calico-kube-controllers-69bcc55845-", Namespace:"calico-system", SelfLink:"", UID:"720f2bda-e32c-4788-8276-d58130a626c1", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bcc55845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93b0ad9d1e95f3728a07a1bb8df743964484271ba2f09e7ec2622a25090a21f6", Pod:"calico-kube-controllers-69bcc55845-fstwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86b8ec95d9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.913 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.913 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" iface="eth0" netns="" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.913 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.913 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.933 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.933 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.933 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.937 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.937 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" HandleID="k8s-pod-network.dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Workload="localhost-k8s-calico--kube--controllers--69bcc55845--fstwj-eth0" Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.938 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:27.943346 containerd[1470]: 2025-01-13 21:23:27.940 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912" Jan 13 21:23:27.943798 containerd[1470]: time="2025-01-13T21:23:27.943393131Z" level=info msg="TearDown network for sandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" successfully" Jan 13 21:23:28.023610 containerd[1470]: time="2025-01-13T21:23:28.023563227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:28.023738 containerd[1470]: time="2025-01-13T21:23:28.023630314Z" level=info msg="RemovePodSandbox \"dd8ac61d1dac6ae43486f9a861ef369cebdf238b629255ecedfbf40c71964912\" returns successfully" Jan 13 21:23:28.024327 containerd[1470]: time="2025-01-13T21:23:28.024265667Z" level=info msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.059 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894", Pod:"coredns-6f6b679f8f-jdsfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5cad46b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.059 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.059 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" iface="eth0" netns="" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.059 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.059 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.080 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.081 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.081 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.086 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.086 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.087 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.092482 containerd[1470]: 2025-01-13 21:23:28.090 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.092905 containerd[1470]: time="2025-01-13T21:23:28.092512021Z" level=info msg="TearDown network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" successfully" Jan 13 21:23:28.092905 containerd[1470]: time="2025-01-13T21:23:28.092536829Z" level=info msg="StopPodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" returns successfully" Jan 13 21:23:28.093041 containerd[1470]: time="2025-01-13T21:23:28.093012620Z" level=info msg="RemovePodSandbox for \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" Jan 13 21:23:28.093064 containerd[1470]: time="2025-01-13T21:23:28.093044310Z" level=info msg="Forcibly stopping sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\"" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.131 [WARNING][5296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"84cfaf78-a4f0-4a07-a518-aac1fc6dfb0c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5afe3b595ea3e28e2ec879ec83c0d685f423f1b6abb763600cf5ac472f667894", Pod:"coredns-6f6b679f8f-jdsfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5cad46b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.131 [INFO][5296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.131 [INFO][5296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" iface="eth0" netns="" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.131 [INFO][5296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.131 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.150 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.150 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.150 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.155 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.155 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" HandleID="k8s-pod-network.e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Workload="localhost-k8s-coredns--6f6b679f8f--jdsfv-eth0" Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.156 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.161162 containerd[1470]: 2025-01-13 21:23:28.158 [INFO][5296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c" Jan 13 21:23:28.161626 containerd[1470]: time="2025-01-13T21:23:28.161207667Z" level=info msg="TearDown network for sandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" successfully" Jan 13 21:23:28.255309 containerd[1470]: time="2025-01-13T21:23:28.255234184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:28.255438 containerd[1470]: time="2025-01-13T21:23:28.255325086Z" level=info msg="RemovePodSandbox \"e658906902b1a074899b2498e5220da49548e8be43b1f2585c42eac62c29947c\" returns successfully" Jan 13 21:23:28.255878 containerd[1470]: time="2025-01-13T21:23:28.255849940Z" level=info msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.288 [WARNING][5325] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8db2a05b-6de2-4a2c-8a45-6f493623a948", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046", Pod:"calico-apiserver-7bff5578d8-wgmqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c2c59808d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.288 [INFO][5325] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.288 [INFO][5325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" iface="eth0" netns="" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.288 [INFO][5325] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.288 [INFO][5325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.307 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.307 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.307 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.312 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.312 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.313 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.318126 containerd[1470]: 2025-01-13 21:23:28.315 [INFO][5325] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.318738 containerd[1470]: time="2025-01-13T21:23:28.318161172Z" level=info msg="TearDown network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" successfully" Jan 13 21:23:28.318738 containerd[1470]: time="2025-01-13T21:23:28.318215926Z" level=info msg="StopPodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" returns successfully" Jan 13 21:23:28.318858 containerd[1470]: time="2025-01-13T21:23:28.318827785Z" level=info msg="RemovePodSandbox for \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" Jan 13 21:23:28.318891 containerd[1470]: time="2025-01-13T21:23:28.318866838Z" level=info msg="Forcibly stopping sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\"" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.355 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8db2a05b-6de2-4a2c-8a45-6f493623a948", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e55d0ec59996624f1bdce3ded5fcd4a89df91c5c6433ca8f72f44e246aa046", Pod:"calico-apiserver-7bff5578d8-wgmqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c2c59808d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.355 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.355 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" iface="eth0" netns="" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.355 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.355 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.373 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.373 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.373 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.379 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.379 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" HandleID="k8s-pod-network.c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Workload="localhost-k8s-calico--apiserver--7bff5578d8--wgmqb-eth0" Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.380 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.400999 containerd[1470]: 2025-01-13 21:23:28.382 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9" Jan 13 21:23:28.400999 containerd[1470]: time="2025-01-13T21:23:28.400955713Z" level=info msg="TearDown network for sandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" successfully" Jan 13 21:23:28.539649 containerd[1470]: time="2025-01-13T21:23:28.539593284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:28.539649 containerd[1470]: time="2025-01-13T21:23:28.539657686Z" level=info msg="RemovePodSandbox \"c7bf2357ec3cf86b01a7e8df5a04118170367f0f1edca8b2aacd6daad6f443a9\" returns successfully" Jan 13 21:23:28.540203 containerd[1470]: time="2025-01-13T21:23:28.540164707Z" level=info msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.572 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5431517f-80a8-45c9-b517-ab6eb8f8217a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234", Pod:"calico-apiserver-7bff5578d8-dbz9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15d2bf27578", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.572 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.572 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" iface="eth0" netns="" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.572 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.572 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.592 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.592 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.592 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.596 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.596 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.597 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.602133 containerd[1470]: 2025-01-13 21:23:28.599 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.602580 containerd[1470]: time="2025-01-13T21:23:28.602170049Z" level=info msg="TearDown network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" successfully" Jan 13 21:23:28.602580 containerd[1470]: time="2025-01-13T21:23:28.602199454Z" level=info msg="StopPodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" returns successfully" Jan 13 21:23:28.602815 containerd[1470]: time="2025-01-13T21:23:28.602775816Z" level=info msg="RemovePodSandbox for \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" Jan 13 21:23:28.602855 containerd[1470]: time="2025-01-13T21:23:28.602821733Z" level=info msg="Forcibly stopping sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\"" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.644 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0", GenerateName:"calico-apiserver-7bff5578d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5431517f-80a8-45c9-b517-ab6eb8f8217a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bff5578d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef4459c64d3151effb792e1929e8f67a559ff559d7ba005f9a01bf6b98fab234", Pod:"calico-apiserver-7bff5578d8-dbz9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15d2bf27578", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.645 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.645 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" iface="eth0" netns="" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.645 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.645 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.665 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.665 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.665 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.670 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.670 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" HandleID="k8s-pod-network.daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Workload="localhost-k8s-calico--apiserver--7bff5578d8--dbz9n-eth0" Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.671 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.675843 containerd[1470]: 2025-01-13 21:23:28.673 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350" Jan 13 21:23:28.675843 containerd[1470]: time="2025-01-13T21:23:28.675831370Z" level=info msg="TearDown network for sandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" successfully" Jan 13 21:23:28.719309 containerd[1470]: time="2025-01-13T21:23:28.719257759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:28.719383 containerd[1470]: time="2025-01-13T21:23:28.719328163Z" level=info msg="RemovePodSandbox \"daee8cc1708a5707cc28eca5de5858bfe44c653b3d9b0019480501346b321350\" returns successfully" Jan 13 21:23:28.719857 containerd[1470]: time="2025-01-13T21:23:28.719817249Z" level=info msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.752 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hch8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875", Pod:"coredns-6f6b679f8f-hch8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79b4d392503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.753 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.753 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" iface="eth0" netns="" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.753 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.753 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.772 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.772 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.772 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.776 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.776 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.778 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.782777 containerd[1470]: 2025-01-13 21:23:28.780 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.783323 containerd[1470]: time="2025-01-13T21:23:28.782801015Z" level=info msg="TearDown network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" successfully" Jan 13 21:23:28.783323 containerd[1470]: time="2025-01-13T21:23:28.782827535Z" level=info msg="StopPodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" returns successfully" Jan 13 21:23:28.783323 containerd[1470]: time="2025-01-13T21:23:28.783264633Z" level=info msg="RemovePodSandbox for \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" Jan 13 21:23:28.783323 containerd[1470]: time="2025-01-13T21:23:28.783308706Z" level=info msg="Forcibly stopping sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\"" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.816 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hch8x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dc0cbdaa-90e4-4882-a40a-2f9f6a0b3e5a", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b3d8c91c98048095841239459b4aa3fbd8a54f2d27b30036daf378855c7d875", Pod:"coredns-6f6b679f8f-hch8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79b4d392503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.816 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.816 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" iface="eth0" netns="" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.816 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.816 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.835 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.835 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.835 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.839 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.839 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" HandleID="k8s-pod-network.8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Workload="localhost-k8s-coredns--6f6b679f8f--hch8x-eth0" Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.841 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:28.845842 containerd[1470]: 2025-01-13 21:23:28.843 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd" Jan 13 21:23:28.846534 containerd[1470]: time="2025-01-13T21:23:28.845887424Z" level=info msg="TearDown network for sandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" successfully" Jan 13 21:23:28.939677 containerd[1470]: time="2025-01-13T21:23:28.939559431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:28.939677 containerd[1470]: time="2025-01-13T21:23:28.939656975Z" level=info msg="RemovePodSandbox \"8f4a5d28cdf408da74d118a19180bd3b8d52c68648d8ece693abbab3323efacd\" returns successfully" Jan 13 21:23:28.940465 containerd[1470]: time="2025-01-13T21:23:28.940438355Z" level=info msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.975 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r48mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea69c22-42f9-473c-8e07-d63b3f3fd2a2", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636", Pod:"csi-node-driver-r48mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e0b5de0705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.975 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.975 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" iface="eth0" netns="" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.975 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.975 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.995 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.995 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:28.995 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:29.052 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:29.052 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:29.054 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:29.058775 containerd[1470]: 2025-01-13 21:23:29.056 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.059257 containerd[1470]: time="2025-01-13T21:23:29.058812679Z" level=info msg="TearDown network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" successfully" Jan 13 21:23:29.059257 containerd[1470]: time="2025-01-13T21:23:29.058839509Z" level=info msg="StopPodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" returns successfully" Jan 13 21:23:29.061563 containerd[1470]: time="2025-01-13T21:23:29.061527059Z" level=info msg="RemovePodSandbox for \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" Jan 13 21:23:29.063723 containerd[1470]: time="2025-01-13T21:23:29.063690677Z" level=info msg="Forcibly stopping sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\"" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.096 [WARNING][5536] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r48mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea69c22-42f9-473c-8e07-d63b3f3fd2a2", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ff5b6ba5f2d9754ab9f38c3cb531c370c003cbfc96c48af38ce0201d0d7d636", Pod:"csi-node-driver-r48mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e0b5de0705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.096 [INFO][5536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.096 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" iface="eth0" netns="" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.096 [INFO][5536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.096 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.116 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.117 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.117 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.121 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.121 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" HandleID="k8s-pod-network.b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Workload="localhost-k8s-csi--node--driver--r48mv-eth0" Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.122 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:29.126969 containerd[1470]: 2025-01-13 21:23:29.124 [INFO][5536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225" Jan 13 21:23:29.127414 containerd[1470]: time="2025-01-13T21:23:29.126988039Z" level=info msg="TearDown network for sandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" successfully" Jan 13 21:23:29.228097 containerd[1470]: time="2025-01-13T21:23:29.227967353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:23:29.228097 containerd[1470]: time="2025-01-13T21:23:29.228052305Z" level=info msg="RemovePodSandbox \"b4986ef3ab71af20d3da84fec8101472b7756ac42d00f8841b91c467cf360225\" returns successfully" Jan 13 21:23:31.994939 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:53850.service - OpenSSH per-connection server daemon (10.0.0.1:53850). Jan 13 21:23:32.098824 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 53850 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:32.101484 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:32.107959 systemd-logind[1451]: New session 16 of user core. Jan 13 21:23:32.120537 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:23:32.258589 sshd[5553]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:32.263865 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:53850.service: Deactivated successfully. Jan 13 21:23:32.266078 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:23:32.266953 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:23:32.268239 systemd-logind[1451]: Removed session 16. Jan 13 21:23:37.270740 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:54820.service - OpenSSH per-connection server daemon (10.0.0.1:54820). Jan 13 21:23:37.306244 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 54820 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:37.308176 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:37.312478 systemd-logind[1451]: New session 17 of user core. Jan 13 21:23:37.321416 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:23:37.428359 sshd[5592]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:37.437939 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:54820.service: Deactivated successfully. Jan 13 21:23:37.439629 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 13 21:23:37.441040 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:23:37.442742 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). Jan 13 21:23:37.443501 systemd-logind[1451]: Removed session 17. Jan 13 21:23:37.474344 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:37.475852 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:37.479937 systemd-logind[1451]: New session 18 of user core. Jan 13 21:23:37.493413 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:23:37.713120 sshd[5607]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:37.725185 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:54834.service: Deactivated successfully. Jan 13 21:23:37.726971 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:23:37.728204 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:23:37.736547 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:54836.service - OpenSSH per-connection server daemon (10.0.0.1:54836). Jan 13 21:23:37.737409 systemd-logind[1451]: Removed session 18. Jan 13 21:23:37.765974 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 54836 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:37.767540 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:37.771602 systemd-logind[1451]: New session 19 of user core. Jan 13 21:23:37.781414 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:23:39.577253 sshd[5619]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:39.594834 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:54844.service - OpenSSH per-connection server daemon (10.0.0.1:54844). Jan 13 21:23:39.595422 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:54836.service: Deactivated successfully. Jan 13 21:23:39.600785 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:23:39.602383 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:23:39.604706 systemd-logind[1451]: Removed session 19. Jan 13 21:23:39.627070 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 54844 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:39.628782 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:39.633213 systemd-logind[1451]: New session 20 of user core. Jan 13 21:23:39.643503 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:23:39.865892 sshd[5639]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:39.875624 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:54844.service: Deactivated successfully. Jan 13 21:23:39.878263 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:23:39.879008 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:23:39.890610 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:54858.service - OpenSSH per-connection server daemon (10.0.0.1:54858). Jan 13 21:23:39.891640 systemd-logind[1451]: Removed session 20. 
Jan 13 21:23:39.918839 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 54858 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:39.920491 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:39.924874 systemd-logind[1451]: New session 21 of user core. Jan 13 21:23:39.939475 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:23:40.078120 sshd[5654]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:40.082210 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:54858.service: Deactivated successfully. Jan 13 21:23:40.084724 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:23:40.085595 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:23:40.086550 systemd-logind[1451]: Removed session 21. Jan 13 21:23:45.089074 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:57188.service - OpenSSH per-connection server daemon (10.0.0.1:57188). Jan 13 21:23:45.122127 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 57188 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:45.123785 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:45.127528 systemd-logind[1451]: New session 22 of user core. Jan 13 21:23:45.138415 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:23:45.240467 sshd[5677]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:45.244682 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:57188.service: Deactivated successfully. Jan 13 21:23:45.246399 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:23:45.246958 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:23:45.247719 systemd-logind[1451]: Removed session 22. Jan 13 21:23:50.253546 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:57194.service - OpenSSH per-connection server daemon (10.0.0.1:57194). Jan 13 21:23:50.289726 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 57194 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:50.291685 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:50.296873 systemd-logind[1451]: New session 23 of user core. Jan 13 21:23:50.306501 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:23:50.428035 sshd[5694]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:50.432420 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:57194.service: Deactivated successfully. Jan 13 21:23:50.434797 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:23:50.435695 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:23:50.436726 systemd-logind[1451]: Removed session 23. 
Jan 13 21:23:51.808233 kubelet[2494]: E0113 21:23:51.808170 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:52.082468 kubelet[2494]: E0113 21:23:52.082374 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:54.807648 kubelet[2494]: E0113 21:23:54.807603 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:55.441726 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:35028.service - OpenSSH per-connection server daemon (10.0.0.1:35028). Jan 13 21:23:55.495983 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 35028 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:23:55.498166 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:55.502834 systemd-logind[1451]: New session 24 of user core. Jan 13 21:23:55.510788 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:23:55.719574 sshd[5731]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:55.728144 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:35028.service: Deactivated successfully. Jan 13 21:23:55.731982 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:23:55.733609 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:23:55.735918 systemd-logind[1451]: Removed session 24. Jan 13 21:23:56.808096 kubelet[2494]: E0113 21:23:56.808054 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"