Jan 17 12:16:51.223237 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:16:51.223269 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:16:51.223280 kernel: BIOS-provided physical RAM map:
Jan 17 12:16:51.223289 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:16:51.223297 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:16:51.223305 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:16:51.223316 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 17 12:16:51.223325 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 17 12:16:51.223334 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:16:51.223344 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 12:16:51.223350 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:16:51.223357 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:16:51.223363 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:16:51.223369 kernel: NX (Execute Disable) protection: active
Jan 17 12:16:51.223377 kernel: APIC: Static calls initialized
Jan 17 12:16:51.223386 kernel: SMBIOS 2.8 present.
Jan 17 12:16:51.223393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 17 12:16:51.223399 kernel: Hypervisor detected: KVM Jan 17 12:16:51.223406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:16:51.223413 kernel: kvm-clock: using sched offset of 2838951518 cycles Jan 17 12:16:51.223420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:16:51.223427 kernel: tsc: Detected 2794.748 MHz processor Jan 17 12:16:51.223434 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:16:51.223441 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:16:51.223448 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 17 12:16:51.223457 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:16:51.223464 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:16:51.223471 kernel: Using GB pages for direct mapping Jan 17 12:16:51.223478 kernel: ACPI: Early table checksum verification disabled Jan 17 12:16:51.223485 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 17 12:16:51.223492 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223499 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223506 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223515 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 17 12:16:51.223522 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223529 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223536 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223543 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:16:51.223550 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 17 12:16:51.223557 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 17 12:16:51.223567 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 17 12:16:51.223576 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 17 12:16:51.223584 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 17 12:16:51.223591 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 17 12:16:51.223598 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 17 12:16:51.223605 kernel: No NUMA configuration found Jan 17 12:16:51.223612 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 17 12:16:51.223619 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 17 12:16:51.223629 kernel: Zone ranges: Jan 17 12:16:51.223636 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:16:51.223643 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 17 12:16:51.223650 kernel: Normal empty Jan 17 12:16:51.223667 kernel: Movable zone start for each node Jan 17 12:16:51.223677 kernel: Early memory node ranges Jan 17 12:16:51.223684 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:16:51.223691 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 17 12:16:51.223698 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 17 12:16:51.223708 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:16:51.223715 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:16:51.223723 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 17 12:16:51.223730 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:16:51.223737 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:16:51.223744 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:16:51.223751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:16:51.223759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:16:51.223766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:16:51.223775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:16:51.223783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:16:51.223790 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:16:51.223797 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:16:51.223804 kernel: TSC deadline timer available Jan 17 12:16:51.223811 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 12:16:51.223819 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:16:51.223826 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 12:16:51.223833 kernel: kvm-guest: setup PV sched yield Jan 17 12:16:51.223840 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 12:16:51.223850 kernel: Booting paravirtualized kernel on KVM Jan 17 12:16:51.223857 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:16:51.223865 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 12:16:51.223872 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 17 12:16:51.223879 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 17 12:16:51.223887 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 12:16:51.223894 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:16:51.223901 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:16:51.223909 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:16:51.223919 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:16:51.223927 kernel: random: crng init done Jan 17 12:16:51.223934 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:16:51.223941 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:16:51.223948 kernel: Fallback order for Node 0: 0 Jan 17 12:16:51.223956 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 17 12:16:51.223963 kernel: Policy zone: DMA32 Jan 17 12:16:51.223970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:16:51.223980 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 136900K reserved, 0K cma-reserved) Jan 17 12:16:51.223987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:16:51.223995 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:16:51.224002 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:16:51.224022 kernel: Dynamic Preempt: voluntary Jan 17 12:16:51.224030 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:16:51.224038 kernel: rcu: RCU event tracing is enabled. Jan 17 12:16:51.224045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:16:51.224053 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:16:51.224063 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:16:51.224071 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:16:51.224078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:16:51.224085 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:16:51.224093 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 12:16:51.224100 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:16:51.224107 kernel: Console: colour VGA+ 80x25 Jan 17 12:16:51.224114 kernel: printk: console [ttyS0] enabled Jan 17 12:16:51.224121 kernel: ACPI: Core revision 20230628 Jan 17 12:16:51.224131 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:16:51.224139 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:16:51.224146 kernel: x2apic enabled Jan 17 12:16:51.224153 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:16:51.224160 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 12:16:51.224168 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 12:16:51.224175 kernel: kvm-guest: setup PV IPIs Jan 17 12:16:51.224193 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:16:51.224200 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:16:51.224208 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 17 12:16:51.224216 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:16:51.224224 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:16:51.224235 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:16:51.224244 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:16:51.224252 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:16:51.224262 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:16:51.224273 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:16:51.224282 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:16:51.224290 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:16:51.224297 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:16:51.224305 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:16:51.224313 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 12:16:51.224325 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:16:51.224340 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:16:51.224351 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:16:51.224367 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:16:51.224377 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:16:51.224388 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:16:51.224396 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:16:51.224403 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:16:51.224411 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:16:51.224418 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:16:51.224426 kernel: landlock: Up and running. Jan 17 12:16:51.224435 kernel: SELinux: Initializing. Jan 17 12:16:51.224450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:16:51.224460 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:16:51.224472 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:16:51.224482 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:16:51.224490 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:16:51.224498 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:16:51.224505 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:16:51.224513 kernel: ... version: 0 Jan 17 12:16:51.224524 kernel: ... bit width: 48 Jan 17 12:16:51.224531 kernel: ... generic registers: 6 Jan 17 12:16:51.224539 kernel: ... value mask: 0000ffffffffffff Jan 17 12:16:51.224546 kernel: ... max period: 00007fffffffffff Jan 17 12:16:51.224554 kernel: ... fixed-purpose events: 0 Jan 17 12:16:51.224561 kernel: ... 
event mask: 000000000000003f Jan 17 12:16:51.224569 kernel: signal: max sigframe size: 1776 Jan 17 12:16:51.224577 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:16:51.224585 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:16:51.224592 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:16:51.224602 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:16:51.224610 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 12:16:51.224617 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:16:51.224625 kernel: smpboot: Max logical packages: 1 Jan 17 12:16:51.224633 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 17 12:16:51.224640 kernel: devtmpfs: initialized Jan 17 12:16:51.224648 kernel: x86/mm: Memory block size: 128MB Jan 17 12:16:51.224669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:16:51.224679 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:16:51.224691 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:16:51.224699 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:16:51.224707 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:16:51.224715 kernel: audit: type=2000 audit(1737116210.825:1): state=initialized audit_enabled=0 res=1 Jan 17 12:16:51.224722 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:16:51.224730 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:16:51.224737 kernel: cpuidle: using governor menu Jan 17 12:16:51.224745 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:16:51.224753 kernel: dca service started, version 1.12.1 Jan 17 12:16:51.224763 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:16:51.224771 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:16:51.224778 kernel: PCI: Using configuration type 1 for base access Jan 17 12:16:51.224786 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:16:51.224794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:16:51.224801 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:16:51.224809 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:16:51.224817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:16:51.224824 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:16:51.224834 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:16:51.224842 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:16:51.224849 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:16:51.224857 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:16:51.224865 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:16:51.224872 kernel: ACPI: Interpreter enabled Jan 17 12:16:51.224880 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:16:51.224887 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:16:51.224895 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:16:51.224905 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:16:51.224913 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:16:51.224921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:16:51.225195 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:16:51.225331 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:16:51.225487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:16:51.225501 kernel: PCI host bridge to bus 0000:00 Jan 17 12:16:51.225641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:16:51.225765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:16:51.225913 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:16:51.226052 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 12:16:51.226165 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:16:51.226276 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 17 12:16:51.226387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:16:51.226552 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:16:51.226711 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 12:16:51.226837 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 17 12:16:51.226959 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 17 12:16:51.227096 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 17 12:16:51.227223 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:16:51.227361 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:16:51.227484 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 17 12:16:51.227613 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 17 12:16:51.227779 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 17 12:16:51.227965 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:16:51.228142 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 12:16:51.228328 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 17 
12:16:51.228509 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 17 12:16:51.228697 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:16:51.228837 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 17 12:16:51.228961 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 17 12:16:51.229103 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 17 12:16:51.229229 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 17 12:16:51.229363 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:16:51.229495 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:16:51.229625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:16:51.229762 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 17 12:16:51.229900 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 17 12:16:51.230113 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:16:51.230276 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 12:16:51.230298 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:16:51.230308 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:16:51.230318 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:16:51.230328 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:16:51.230339 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:16:51.230349 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:16:51.230359 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:16:51.230370 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:16:51.230380 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:16:51.230395 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:16:51.230405 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:16:51.230415 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:16:51.230425 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:16:51.230435 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:16:51.230444 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:16:51.230454 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:16:51.230465 kernel: iommu: Default domain type: Translated Jan 17 12:16:51.230475 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:16:51.230489 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:16:51.230499 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:16:51.230509 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:16:51.230519 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 17 12:16:51.230703 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:16:51.230876 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:16:51.231110 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:16:51.231127 kernel: vgaarb: loaded Jan 17 12:16:51.231137 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:16:51.231153 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:16:51.231163 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:16:51.231174 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 
12:16:51.231184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:16:51.231194 kernel: pnp: PnP ACPI init Jan 17 12:16:51.231378 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:16:51.231396 kernel: pnp: PnP ACPI: found 6 devices Jan 17 12:16:51.231408 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:16:51.231424 kernel: NET: Registered PF_INET protocol family Jan 17 12:16:51.231435 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:16:51.231447 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:16:51.231457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:16:51.231468 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:16:51.231477 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:16:51.231488 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:16:51.231498 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:16:51.231509 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:16:51.231525 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:16:51.231535 kernel: NET: Registered PF_XDP protocol family Jan 17 12:16:51.231706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:16:51.231861 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:16:51.232036 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:16:51.232188 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 12:16:51.232323 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:16:51.232451 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 17 12:16:51.232468 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:16:51.232476 kernel: Initialise system trusted keyrings Jan 17 12:16:51.232484 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:16:51.232492 kernel: Key type asymmetric registered Jan 17 12:16:51.232500 kernel: Asymmetric key parser 'x509' registered Jan 17 12:16:51.232508 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:16:51.232515 kernel: io scheduler mq-deadline registered Jan 17 12:16:51.232523 kernel: io scheduler kyber registered Jan 17 12:16:51.232531 kernel: io scheduler bfq registered Jan 17 12:16:51.232541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:16:51.232549 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:16:51.232557 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:16:51.232565 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:16:51.232573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:16:51.232581 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:16:51.232589 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:16:51.232597 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:16:51.232604 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:16:51.232615 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:16:51.232814 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 12:16:51.232945 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 17 12:16:51.233098 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:16:50 UTC (1737116210) Jan 17 12:16:51.233219 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:16:51.233229 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:16:51.233237 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:16:51.233245 kernel: Segment Routing with IPv6 Jan 17 12:16:51.233258 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:16:51.233266 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:16:51.233273 kernel: Key type dns_resolver registered Jan 17 12:16:51.233281 kernel: IPI shorthand broadcast: enabled Jan 17 12:16:51.233289 kernel: sched_clock: Marking stable (932002592, 108580088)->(1063351002, -22768322) Jan 17 12:16:51.233296 kernel: registered taskstats version 1 Jan 17 12:16:51.233304 kernel: Loading compiled-in X.509 certificates Jan 17 12:16:51.233312 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:16:51.233320 kernel: Key type .fscrypt registered Jan 17 12:16:51.233330 kernel: Key type fscrypt-provisioning registered Jan 17 12:16:51.233338 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:16:51.233346 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:16:51.233353 kernel: ima: No architecture policies found Jan 17 12:16:51.233361 kernel: clk: Disabling unused clocks Jan 17 12:16:51.233369 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:16:51.233376 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:16:51.233385 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:16:51.233396 kernel: Run /init as init process Jan 17 12:16:51.233410 kernel: with arguments: Jan 17 12:16:51.233420 kernel: /init Jan 17 12:16:51.233427 kernel: with environment: Jan 17 12:16:51.233435 kernel: HOME=/ Jan 17 12:16:51.233442 kernel: TERM=linux Jan 17 12:16:51.233450 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:16:51.233464 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:16:51.233474 systemd[1]: Detected virtualization kvm. Jan 17 12:16:51.233485 systemd[1]: Detected architecture x86-64. Jan 17 12:16:51.233493 systemd[1]: Running in initrd. Jan 17 12:16:51.233501 systemd[1]: No hostname configured, using default hostname. Jan 17 12:16:51.233509 systemd[1]: Hostname set to . Jan 17 12:16:51.233517 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:16:51.233526 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:16:51.233534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:16:51.233543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:16:51.233567 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:16:51.233597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 17 12:16:51.233608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:16:51.233617 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:16:51.233627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:16:51.233638 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:16:51.233647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:16:51.233684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:16:51.233693 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:16:51.233701 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:16:51.233710 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:16:51.233719 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:16:51.233736 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:16:51.233756 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:16:51.233772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:16:51.233782 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:16:51.233791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:16:51.233800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:16:51.233808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:16:51.233816 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:16:51.233825 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:16:51.233836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:16:51.233844 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:16:51.233852 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:16:51.233861 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:16:51.233869 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:16:51.233877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:16:51.233886 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:16:51.233894 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:16:51.233902 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:16:51.233938 systemd-journald[193]: Collecting audit messages is disabled. Jan 17 12:16:51.233960 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:16:51.233972 systemd-journald[193]: Journal started Jan 17 12:16:51.233993 systemd-journald[193]: Runtime Journal (/run/log/journal/e7b502fe5df642ca989f6349764985e2) is 6.0M, max 48.4M, 42.3M free. Jan 17 12:16:51.218342 systemd-modules-load[194]: Inserted module 'overlay' Jan 17 12:16:51.257916 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 17 12:16:51.257935 kernel: Bridge firewalling registered Jan 17 12:16:51.256993 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 17 12:16:51.259793 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:16:51.261290 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:16:51.263848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:16:51.266578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:16:51.284270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:16:51.287556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:16:51.290842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:16:51.296219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:16:51.324495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:16:51.325966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:16:51.327722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:16:51.341247 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:16:51.343541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:16:51.347681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:16:51.353901 dracut-cmdline[227]: dracut-dracut-053 Jan 17 12:16:51.357900 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:16:51.395270 systemd-resolved[234]: Positive Trust Anchors: Jan 17 12:16:51.395287 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:16:51.395317 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:16:51.398204 systemd-resolved[234]: Defaulting to hostname 'linux'. Jan 17 12:16:51.399598 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:16:51.405039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:16:51.460060 kernel: SCSI subsystem initialized Jan 17 12:16:51.495038 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 12:16:51.509071 kernel: iscsi: registered transport (tcp) Jan 17 12:16:51.537078 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:16:51.537166 kernel: QLogic iSCSI HBA Driver Jan 17 12:16:51.591621 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:16:51.597344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:16:51.633069 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:16:51.633145 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:16:51.633157 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:16:51.679069 kernel: raid6: avx2x4 gen() 20230 MB/s Jan 17 12:16:51.696052 kernel: raid6: avx2x2 gen() 19729 MB/s Jan 17 12:16:51.713485 kernel: raid6: avx2x1 gen() 16491 MB/s Jan 17 12:16:51.713564 kernel: raid6: using algorithm avx2x4 gen() 20230 MB/s Jan 17 12:16:51.734459 kernel: raid6: .... xor() 6263 MB/s, rmw enabled Jan 17 12:16:51.734533 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:16:51.761067 kernel: xor: automatically using best checksumming function avx Jan 17 12:16:51.986077 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:16:52.002555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:16:52.057364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:16:52.077782 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 17 12:16:52.087991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:16:52.106219 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:16:52.124537 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 17 12:16:52.163307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:16:52.170177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:16:52.256694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:16:52.264186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:16:52.281702 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:16:52.285102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:16:52.286824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:16:52.296098 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:16:52.358490 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:16:52.358687 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:16:52.358700 kernel: GPT:9289727 != 19775487 Jan 17 12:16:52.358710 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:16:52.358721 kernel: GPT:9289727 != 19775487 Jan 17 12:16:52.358732 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:16:52.358757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:16:52.288855 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:16:52.299194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:16:52.315975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 17 12:16:52.356042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:16:52.356109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:16:52.363043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:16:52.364495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:16:52.364552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:16:52.366419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:16:52.383080 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:16:52.383115 kernel: libata version 3.00 loaded. Jan 17 12:16:52.385142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:16:52.396533 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:16:52.479686 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:16:52.479709 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:16:52.479921 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:16:52.480162 kernel: scsi host0: ahci Jan 17 12:16:52.480369 kernel: scsi host1: ahci Jan 17 12:16:52.480567 kernel: scsi host2: ahci Jan 17 12:16:52.480774 kernel: scsi host3: ahci Jan 17 12:16:52.480978 kernel: scsi host4: ahci Jan 17 12:16:52.481197 kernel: scsi host5: ahci Jan 17 12:16:52.481403 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (456) Jan 17 12:16:52.481420 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 17 12:16:52.481434 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 17 12:16:52.481449 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 17 12:16:52.481463 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 17 12:16:52.481477 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Jan 17 12:16:52.481492 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 17 12:16:52.481506 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 17 12:16:52.481524 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:16:52.481538 kernel: AES CTR mode by8 optimization enabled Jan 17 12:16:52.490415 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:16:52.529310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:16:52.550164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:16:52.552932 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:16:52.558245 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:16:52.558345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:16:52.579239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:16:52.599380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:16:52.621583 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:16:52.788345 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:16:52.788428 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:16:52.788442 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:16:52.788454 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:16:52.790356 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:16:52.790441 kernel: ata3.00: applying bridge limits Jan 17 12:16:52.790456 kernel: ata3.00: configured for UDMA/100 Jan 17 12:16:52.793047 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:16:52.795041 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:16:52.797050 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:16:52.846047 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:16:52.867831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:16:52.867852 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:16:52.956707 disk-uuid[554]: Primary Header is updated. Jan 17 12:16:52.956707 disk-uuid[554]: Secondary Entries is updated. Jan 17 12:16:52.956707 disk-uuid[554]: Secondary Header is updated. Jan 17 12:16:52.984810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:16:52.988039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:16:54.007668 disk-uuid[577]: The operation has completed successfully. Jan 17 12:16:54.009214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:16:54.039757 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:16:54.039875 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:16:54.060177 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:16:54.065433 sh[591]: Success Jan 17 12:16:54.085065 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:16:54.123627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:16:54.132507 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:16:54.135639 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:16:54.169483 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:16:54.169513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:16:54.169524 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:16:54.170506 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:16:54.171270 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:16:54.175908 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:16:54.184143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:16:54.198209 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:16:54.200414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:16:54.210553 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:16:54.210607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:16:54.210618 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:16:54.214032 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:16:54.228807 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:16:54.230663 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:16:54.261489 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:16:54.267281 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:16:54.340963 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:16:54.358233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:16:54.401518 systemd-networkd[773]: lo: Link UP Jan 17 12:16:54.401531 systemd-networkd[773]: lo: Gained carrier Jan 17 12:16:54.403670 systemd-networkd[773]: Enumeration completed Jan 17 12:16:54.403801 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:16:54.405766 systemd[1]: Reached target network.target - Network. Jan 17 12:16:54.406231 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:16:54.406236 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:16:54.409522 systemd-networkd[773]: eth0: Link UP Jan 17 12:16:54.409528 systemd-networkd[773]: eth0: Gained carrier Jan 17 12:16:54.409544 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:16:54.425087 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:16:54.476431 ignition[699]: Ignition 2.19.0 Jan 17 12:16:54.476448 ignition[699]: Stage: fetch-offline Jan 17 12:16:54.476508 ignition[699]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:54.476526 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:54.476728 ignition[699]: parsed url from cmdline: "" Jan 17 12:16:54.476733 ignition[699]: no config URL provided Jan 17 12:16:54.476741 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:16:54.476755 ignition[699]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:16:54.476797 ignition[699]: op(1): [started] loading QEMU firmware config module Jan 17 12:16:54.476805 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:16:54.487187 ignition[699]: op(1): [finished] loading QEMU firmware config module Jan 17 12:16:54.533647 ignition[699]: parsing config with SHA512: 05f3470bfc8337ffd93fd029bc304f5c3598ca15f0e2b4e58d9c919b57f6a0072291f4755fd7e8b8a8547758283677bcdf966b3820f966099030cc589f61312d Jan 17 12:16:54.538791 unknown[699]: fetched base config from "system" Jan 17 12:16:54.538810 unknown[699]: fetched user config from "qemu" Jan 17 12:16:54.539292 ignition[699]: fetch-offline: fetch-offline passed Jan 17 12:16:54.539389 ignition[699]: Ignition finished successfully Jan 17 12:16:54.542461 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 12:16:54.544998 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:16:54.565408 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:16:54.590821 ignition[786]: Ignition 2.19.0 Jan 17 12:16:54.590833 ignition[786]: Stage: kargs Jan 17 12:16:54.591049 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:54.591062 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:54.591944 ignition[786]: kargs: kargs passed Jan 17 12:16:54.596070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:16:54.591991 ignition[786]: Ignition finished successfully Jan 17 12:16:54.604173 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:16:54.625548 ignition[794]: Ignition 2.19.0 Jan 17 12:16:54.625559 ignition[794]: Stage: disks Jan 17 12:16:54.625749 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:54.625761 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:54.629278 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:16:54.626569 ignition[794]: disks: disks passed Jan 17 12:16:54.631271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:16:54.626617 ignition[794]: Ignition finished successfully Jan 17 12:16:54.634286 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:16:54.636529 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:16:54.637884 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:16:54.639668 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:16:54.658379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:16:54.677274 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:16:54.726949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:16:54.740143 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:16:54.855041 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:16:54.855571 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:16:54.858239 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:16:54.875148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:16:54.879215 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:16:54.880457 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:16:54.880506 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:16:54.880534 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:16:54.888686 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:16:54.890267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:16:54.901037 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 17 12:16:54.901096 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:16:54.903456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:16:54.903539 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:16:54.908065 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:16:54.910724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:16:54.943723 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:16:54.950027 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:16:54.955359 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:16:54.960141 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:16:55.076254 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:16:55.089261 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:16:55.093127 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:16:55.098032 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:16:55.119705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:16:55.130261 ignition[925]: INFO : Ignition 2.19.0 Jan 17 12:16:55.130261 ignition[925]: INFO : Stage: mount Jan 17 12:16:55.132138 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:55.132138 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:55.135294 ignition[925]: INFO : mount: mount passed Jan 17 12:16:55.136164 ignition[925]: INFO : Ignition finished successfully Jan 17 12:16:55.139128 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:16:55.153310 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:16:55.168374 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:16:55.175356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:16:55.184042 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 17 12:16:55.188171 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:16:55.188204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:16:55.188219 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:16:55.194043 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:16:55.196043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:16:55.226465 ignition[954]: INFO : Ignition 2.19.0 Jan 17 12:16:55.226465 ignition[954]: INFO : Stage: files Jan 17 12:16:55.228293 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:55.228293 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:55.230898 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:16:55.232891 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:16:55.232891 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:16:55.236422 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:16:55.238076 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:16:55.240025 unknown[954]: wrote ssh authorized keys file for user: core Jan 17 12:16:55.241421 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:16:55.243165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:16:55.243165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:16:55.286583 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:16:55.440690 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:16:55.440690 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:16:55.445158 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:16:55.447081 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:16:55.449229 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:16:55.451222 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:16:55.453330 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:16:55.455093 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:16:55.457132 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:16:55.459282 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:16:55.461547 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:16:55.463590 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:16:55.466903 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:16:55.469757 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:16:55.472262 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:16:55.826731 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:16:56.002246 systemd-networkd[773]: eth0: Gained IPv6LL Jan 17 12:16:56.282539 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:16:56.282539 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 12:16:56.287242 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:16:56.311904 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:16:56.318945 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:16:56.321030 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:16:56.321030 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:16:56.321030 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:16:56.321030 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:16:56.321030 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:16:56.321030 ignition[954]: INFO : files: files passed Jan 17 12:16:56.321030 ignition[954]: INFO : Ignition finished successfully Jan 17 12:16:56.322761 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:16:56.335158 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:16:56.337493 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 12:16:56.339727 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:16:56.339846 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:16:56.356411 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:16:56.359600 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:16:56.361725 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:16:56.365060 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:16:56.363156 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:16:56.365320 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:16:56.377257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:16:56.407066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:16:56.407232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:16:56.410411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:16:56.411257 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:16:56.411677 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:16:56.412794 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:16:56.432399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:16:56.443266 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:16:56.455252 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:16:56.456694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:16:56.459141 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:16:56.461400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:16:56.461580 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:16:56.464113 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:16:56.465867 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:16:56.468140 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:16:56.470372 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:16:56.472560 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:16:56.476000 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:16:56.478314 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:16:56.480817 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:16:56.483030 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:16:56.485418 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:16:56.487372 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:16:56.487557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:16:56.490282 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 12:16:56.491669 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:16:56.493999 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:16:56.494139 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:16:56.496691 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:16:56.496837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:16:56.499492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:16:56.499619 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:16:56.501688 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:16:56.504767 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:16:56.508095 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:16:56.509947 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:16:56.512046 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:16:56.514336 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:16:56.514455 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:16:56.516346 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:16:56.516444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:16:56.518688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:16:56.518848 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:16:56.521603 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:16:56.521716 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:16:56.536352 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:16:56.539412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:16:56.540399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:16:56.540534 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:16:56.542782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:16:56.542922 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:16:56.549850 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:16:56.550130 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:16:56.554580 ignition[1008]: INFO : Ignition 2.19.0 Jan 17 12:16:56.568795 ignition[1008]: INFO : Stage: umount Jan 17 12:16:56.568795 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:16:56.568795 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:16:56.568795 ignition[1008]: INFO : umount: umount passed Jan 17 12:16:56.568795 ignition[1008]: INFO : Ignition finished successfully Jan 17 12:16:56.559978 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:16:56.560153 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:16:56.569083 systemd[1]: Stopped target network.target - Network. Jan 17 12:16:56.570692 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:16:56.570767 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 17 12:16:56.572976 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:16:56.573139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:16:56.575471 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:16:56.575554 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:16:56.577555 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:16:56.577621 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:16:56.580090 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:16:56.582481 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:16:56.586241 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:16:56.586902 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:16:56.587042 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:16:56.588105 systemd-networkd[773]: eth0: DHCPv6 lease lost Jan 17 12:16:56.590429 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:16:56.590501 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:16:56.592749 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:16:56.592951 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:16:56.595675 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:16:56.595767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:16:56.608290 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:16:56.610599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:16:56.610709 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:16:56.613044 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:16:56.613115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:16:56.615146 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:16:56.615210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:16:56.617666 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:16:56.653244 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:16:56.654456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:16:56.657787 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:16:56.658914 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:16:56.661715 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:16:56.662905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:16:56.665306 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:16:56.665359 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:16:56.668578 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:16:56.668643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:16:56.672064 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:16:56.672127 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 17 12:16:56.675460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:16:56.675538 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:16:56.731167 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:16:56.732488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:16:56.733831 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:16:56.749416 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:16:56.749480 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:16:56.752422 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:16:56.752478 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:16:56.758875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:16:56.758938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:16:56.763025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:16:56.764256 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:16:56.781793 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:16:56.781939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:16:56.785149 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:16:56.787497 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:16:56.787568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:16:56.805185 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:16:56.812940 systemd[1]: Switching root. Jan 17 12:16:56.840896 systemd-journald[193]: Journal stopped Jan 17 12:16:58.183730 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 17 12:16:58.183808 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:16:58.183829 kernel: SELinux: policy capability open_perms=1 Jan 17 12:16:58.183840 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:16:58.183855 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:16:58.183867 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:16:58.183878 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:16:58.183889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:16:58.183913 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:16:58.183931 kernel: audit: type=1403 audit(1737116217.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:16:58.183944 systemd[1]: Successfully loaded SELinux policy in 41.575ms. Jan 17 12:16:58.183965 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.726ms. Jan 17 12:16:58.183979 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:16:58.183994 systemd[1]: Detected virtualization kvm. Jan 17 12:16:58.184020 systemd[1]: Detected architecture x86-64. 
Jan 17 12:16:58.184033 systemd[1]: Detected first boot. Jan 17 12:16:58.184045 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:16:58.184057 zram_generator::config[1051]: No configuration found. Jan 17 12:16:58.184070 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:16:58.184082 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:16:58.184094 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:16:58.184117 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:16:58.184133 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:16:58.184145 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:16:58.184157 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:16:58.184169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:16:58.184181 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:16:58.184193 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:16:58.184205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:16:58.184227 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:16:58.184240 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:16:58.184252 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:16:58.184264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:16:58.184276 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:16:58.184289 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:16:58.184301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:16:58.184313 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:16:58.184326 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:16:58.184341 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:16:58.184353 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:16:58.184365 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:16:58.184377 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:16:58.184391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:16:58.184403 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:16:58.184416 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:16:58.184428 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:16:58.184443 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:16:58.184455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:16:58.184476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:16:58.184489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 17 12:16:58.184501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:16:58.184514 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:16:58.184526 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:16:58.184538 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:16:58.184551 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:16:58.184566 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:58.184578 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:16:58.184590 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:16:58.184603 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:16:58.184615 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:16:58.184628 systemd[1]: Reached target machines.target - Containers. Jan 17 12:16:58.184640 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:16:58.184652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:16:58.184669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:16:58.184682 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:16:58.184694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:16:58.184706 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:16:58.184718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:16:58.184730 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:16:58.184742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:16:58.184755 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:16:58.184767 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:16:58.184782 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:16:58.184800 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:16:58.184813 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:16:58.184825 kernel: loop: module loaded Jan 17 12:16:58.184836 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:16:58.184849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:16:58.184861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:16:58.184873 kernel: ACPI: bus type drm_connector registered Jan 17 12:16:58.184884 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:16:58.184899 kernel: fuse: init (API version 7.39) Jan 17 12:16:58.184932 systemd-journald[1121]: Collecting audit messages is disabled. 
Jan 17 12:16:58.184954 systemd-journald[1121]: Journal started Jan 17 12:16:58.184977 systemd-journald[1121]: Runtime Journal (/run/log/journal/e7b502fe5df642ca989f6349764985e2) is 6.0M, max 48.4M, 42.3M free. Jan 17 12:16:57.937095 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:16:57.960164 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:16:57.960685 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:16:58.187063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:16:58.189238 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:16:58.189298 systemd[1]: Stopped verity-setup.service. Jan 17 12:16:58.193032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:58.200033 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:16:58.201752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:16:58.202976 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:16:58.204239 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:16:58.205351 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:16:58.206591 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:16:58.207854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:16:58.209151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:16:58.210646 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:16:58.212312 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:16:58.212531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:16:58.214304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:16:58.214529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:16:58.216087 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:16:58.216286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:16:58.217711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:16:58.217914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:16:58.219492 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:16:58.219698 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:16:58.221271 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:16:58.221514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:16:58.223066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:16:58.224560 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:16:58.226534 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:16:58.242764 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:16:58.251088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:16:58.253395 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 17 12:16:58.254657 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:16:58.254683 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:16:58.256817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:16:58.259193 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:16:58.261620 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:16:58.262921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:16:58.266528 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:16:58.270867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:16:58.272588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:16:58.273864 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:16:58.275223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:16:58.280740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:16:58.284330 systemd-journald[1121]: Time spent on flushing to /var/log/journal/e7b502fe5df642ca989f6349764985e2 is 16.909ms for 950 entries. Jan 17 12:16:58.284330 systemd-journald[1121]: System Journal (/var/log/journal/e7b502fe5df642ca989f6349764985e2) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:16:58.315765 systemd-journald[1121]: Received client request to flush runtime journal. Jan 17 12:16:58.315801 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:16:58.286184 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:16:58.291403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:16:58.294348 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:16:58.296896 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:16:58.298472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:16:58.305150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:16:58.307668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:16:58.310681 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:16:58.321451 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:16:58.324515 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:16:58.326834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:16:58.338253 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:16:58.342568 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 17 12:16:58.343599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:16:58.352113 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 17 12:16:58.352134 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 17 12:16:58.358311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:16:58.373394 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:16:58.375778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:16:58.376652 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:16:58.378040 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 12:16:58.406181 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:16:58.417393 kernel: loop2: detected capacity change from 0 to 210664 Jan 17 12:16:58.415319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:16:58.436679 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 17 12:16:58.436705 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 17 12:16:58.443335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:16:58.463038 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:16:58.476035 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:16:58.493040 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:16:58.499540 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:16:58.500177 (sd-merge)[1194]: Merged extensions into '/usr'. Jan 17 12:16:58.505207 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:16:58.505222 systemd[1]: Reloading... Jan 17 12:16:58.556052 zram_generator::config[1219]: No configuration found. Jan 17 12:16:58.749487 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:16:58.815502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:16:58.875420 systemd[1]: Reloading finished in 369 ms. Jan 17 12:16:58.927300 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:16:58.929143 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:16:58.946351 systemd[1]: Starting ensure-sysext.service... Jan 17 12:16:58.948917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:16:58.959613 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:16:58.959634 systemd[1]: Reloading... Jan 17 12:16:58.988499 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:16:58.988936 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:16:58.990224 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:16:58.990686 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. 
Jan 17 12:16:58.990791 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 17 12:16:59.002074 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:16:59.002093 systemd-tmpfiles[1258]: Skipping /boot Jan 17 12:16:59.024123 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:16:59.024143 systemd-tmpfiles[1258]: Skipping /boot Jan 17 12:16:59.035035 zram_generator::config[1291]: No configuration found. Jan 17 12:16:59.162947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:16:59.224131 systemd[1]: Reloading finished in 264 ms. Jan 17 12:16:59.245718 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:16:59.264729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:16:59.275185 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:16:59.278124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:16:59.280843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:16:59.286281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:16:59.289115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:16:59.293628 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:16:59.298350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:59.298601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:16:59.305747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:16:59.314421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:16:59.318281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:16:59.319657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:16:59.324350 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:16:59.325607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:59.327626 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:16:59.329862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:16:59.330147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:16:59.332489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:16:59.332773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:16:59.335500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:16:59.335753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:16:59.337052 systemd-udevd[1329]: Using default interface naming scheme 'v255'. 
Jan 17 12:16:59.346219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:16:59.347137 augenrules[1352]: No rules Jan 17 12:16:59.346459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:16:59.353423 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:16:59.355720 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:16:59.357592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:16:59.364776 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:16:59.382723 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:16:59.395682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:16:59.399704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:59.399918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:16:59.409259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:16:59.412857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:16:59.415856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:16:59.418459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:16:59.419841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:16:59.429456 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:16:59.430811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:16:59.432468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:16:59.435789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:16:59.436078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:16:59.439532 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:16:59.439796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:16:59.443569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:16:59.443806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:16:59.447097 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:16:59.447327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:16:59.457117 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1368) Jan 17 12:16:59.474068 systemd[1]: Finished ensure-sysext.service. Jan 17 12:16:59.492376 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:16:59.497571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:16:59.497647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:16:59.509221 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:16:59.513566 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:16:59.526321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:16:59.532372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:16:59.564977 systemd-resolved[1328]: Positive Trust Anchors: Jan 17 12:16:59.565042 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:16:59.565090 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:16:59.570171 systemd-resolved[1328]: Defaulting to hostname 'linux'. Jan 17 12:16:59.570912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:16:59.574233 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:16:59.577156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:16:59.580661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:16:59.585027 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:16:59.593700 systemd-networkd[1386]: lo: Link UP Jan 17 12:16:59.593713 systemd-networkd[1386]: lo: Gained carrier Jan 17 12:16:59.595725 systemd-networkd[1386]: Enumeration completed Jan 17 12:16:59.596192 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:16:59.596210 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:16:59.596994 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:16:59.597194 systemd-networkd[1386]: eth0: Link UP Jan 17 12:16:59.597204 systemd-networkd[1386]: eth0: Gained carrier Jan 17 12:16:59.597218 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:16:59.598671 systemd[1]: Reached target network.target - Network. Jan 17 12:16:59.603731 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:16:59.603988 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:16:59.606207 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:16:59.611285 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 12:16:59.613081 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:16:59.629793 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:16:59.631455 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:17:00.885621 systemd-resolved[1328]: Clock change detected. Flushing caches. Jan 17 12:17:00.885835 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:17:00.886559 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2025-01-17 12:17:00.885552 UTC. Jan 17 12:17:00.904684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:17:00.925150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:00.932748 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:17:01.000153 kernel: kvm_amd: TSC scaling supported Jan 17 12:17:01.000208 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:17:01.000250 kernel: kvm_amd: Nested Paging enabled Jan 17 12:17:01.000263 kernel: kvm_amd: LBR virtualization supported Jan 17 12:17:01.000912 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:17:01.000988 kernel: kvm_amd: Virtual GIF supported Jan 17 12:17:01.022502 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:17:01.051130 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:17:01.076866 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:17:01.078531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:01.086113 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:17:01.120155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:17:01.121815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:01.122980 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:17:01.124209 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:17:01.125546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:17:01.127206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:17:01.128428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:17:01.129734 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:17:01.131057 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:17:01.131092 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:17:01.132042 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:17:01.133844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:17:01.136724 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:17:01.151420 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:17:01.154509 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:17:01.156581 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 17 12:17:01.158123 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:17:01.159425 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:17:01.160740 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:17:01.160780 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:17:01.162195 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:17:01.164862 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:17:01.167802 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:17:01.169760 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:17:01.173827 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:17:01.174981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:17:01.176877 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:17:01.178494 jq[1429]: false Jan 17 12:17:01.181790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:17:01.186846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:17:01.194850 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:17:01.199564 extend-filesystems[1430]: Found loop3 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found loop4 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found loop5 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found sr0 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda1 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda2 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda3 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found usr Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda4 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda6 Jan 17 12:17:01.200795 extend-filesystems[1430]: Found vda7 Jan 17 12:17:01.211413 extend-filesystems[1430]: Found vda9 Jan 17 12:17:01.211413 extend-filesystems[1430]: Checking size of /dev/vda9 Jan 17 12:17:01.207257 dbus-daemon[1428]: [system] SELinux support is enabled Jan 17 12:17:01.203220 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:17:01.218196 extend-filesystems[1430]: Resized partition /dev/vda9 Jan 17 12:17:01.228815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1388) Jan 17 12:17:01.228867 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:17:01.210184 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:17:01.229007 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:17:01.213527 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:17:01.216307 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:17:01.241304 jq[1449]: true Jan 17 12:17:01.220779 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 12:17:01.223552 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:17:01.232068 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:17:01.245362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:17:01.245685 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:17:01.246125 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:17:01.246405 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:17:01.254853 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:17:01.253016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:17:01.253326 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:17:01.280360 update_engine[1446]: I20250117 12:17:01.279876 1446 main.cc:92] Flatcar Update Engine starting Jan 17 12:17:01.266373 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:17:01.281601 jq[1455]: true Jan 17 12:17:01.282812 update_engine[1446]: I20250117 12:17:01.282618 1446 update_check_scheduler.cc:74] Next update check in 9m29s Jan 17 12:17:01.287696 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:17:01.287696 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:17:01.287696 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:17:01.294819 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Jan 17 12:17:01.298328 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:17:01.298682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:17:01.299851 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:17:01.299876 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:17:01.301092 systemd-logind[1439]: New seat seat0. Jan 17 12:17:01.304387 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:17:01.319924 tar[1453]: linux-amd64/helm Jan 17 12:17:01.320482 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:17:01.324356 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:17:01.324879 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:17:01.327158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:17:01.327374 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:17:01.337433 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:17:01.353684 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:17:01.355851 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:17:01.359741 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 17 12:17:01.373444 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:17:01.451253 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:17:01.481831 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:17:01.488034 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:17:01.500612 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:17:01.500895 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:17:01.506033 containerd[1456]: time="2025-01-17T12:17:01.505935458Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:17:01.510110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:17:01.523562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:17:01.532584 containerd[1456]: time="2025-01-17T12:17:01.530986528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.533552 containerd[1456]: time="2025-01-17T12:17:01.533491375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:01.534132 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:17:01.534823 containerd[1456]: time="2025-01-17T12:17:01.534439834Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:17:01.534823 containerd[1456]: time="2025-01-17T12:17:01.534487704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:17:01.534823 containerd[1456]: time="2025-01-17T12:17:01.534747832Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:17:01.534823 containerd[1456]: time="2025-01-17T12:17:01.534766196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.534946 containerd[1456]: time="2025-01-17T12:17:01.534840986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:01.534946 containerd[1456]: time="2025-01-17T12:17:01.534854552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535077991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535097848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535109660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535118707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535215619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535502868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535684999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:01.535772 containerd[1456]: time="2025-01-17T12:17:01.535703714Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:17:01.535958 containerd[1456]: time="2025-01-17T12:17:01.535830231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:17:01.535958 containerd[1456]: time="2025-01-17T12:17:01.535912345Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:17:01.537630 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:17:01.539120 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.543819915Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.543883093Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.543904283Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.543921666Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.543943617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544161906Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544401896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544540787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544557769Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544571404Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544585270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544598455Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544614024Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.544710 containerd[1456]: time="2025-01-17T12:17:01.544627720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.545031 containerd[1456]: time="2025-01-17T12:17:01.544643269Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.545111 containerd[1456]: time="2025-01-17T12:17:01.545093093Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.545162 containerd[1456]: time="2025-01-17T12:17:01.545150330Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.545218 containerd[1456]: time="2025-01-17T12:17:01.545205654Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:17:01.545287 containerd[1456]: time="2025-01-17T12:17:01.545269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545346 containerd[1456]: time="2025-01-17T12:17:01.545332762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545412 containerd[1456]: time="2025-01-17T12:17:01.545398465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545466 containerd[1456]: time="2025-01-17T12:17:01.545454240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545525 containerd[1456]: time="2025-01-17T12:17:01.545512449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545578 containerd[1456]: time="2025-01-17T12:17:01.545566671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545636 containerd[1456]: time="2025-01-17T12:17:01.545622305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545711 containerd[1456]: time="2025-01-17T12:17:01.545698738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545761 containerd[1456]: time="2025-01-17T12:17:01.545750235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545810 containerd[1456]: time="2025-01-17T12:17:01.545798646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.545872 containerd[1456]: time="2025-01-17T12:17:01.545858809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:01.545944 containerd[1456]: time="2025-01-17T12:17:01.545926405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.546029 containerd[1456]: time="2025-01-17T12:17:01.546010243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.546158 containerd[1456]: time="2025-01-17T12:17:01.546137441Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:17:01.546225 containerd[1456]: time="2025-01-17T12:17:01.546212883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.546273 containerd[1456]: time="2025-01-17T12:17:01.546262075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.546333 containerd[1456]: time="2025-01-17T12:17:01.546320865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:17:01.546441 containerd[1456]: time="2025-01-17T12:17:01.546426814Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:17:01.546579 containerd[1456]: time="2025-01-17T12:17:01.546562067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:17:01.546632 containerd[1456]: time="2025-01-17T12:17:01.546620367Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:17:01.546696 containerd[1456]: time="2025-01-17T12:17:01.546682874Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:17:01.546741 containerd[1456]: time="2025-01-17T12:17:01.546729702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:17:01.546805 containerd[1456]: time="2025-01-17T12:17:01.546788041Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:17:01.546870 containerd[1456]: time="2025-01-17T12:17:01.546856129Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:17:01.546928 containerd[1456]: time="2025-01-17T12:17:01.546914739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:01.547290 containerd[1456]: time="2025-01-17T12:17:01.547239338Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:17:01.547477 containerd[1456]: time="2025-01-17T12:17:01.547463007Z" level=info msg="Connect containerd service" Jan 17 12:17:01.547556 containerd[1456]: time="2025-01-17T12:17:01.547543448Z" level=info msg="using legacy CRI server" Jan 17 12:17:01.547610 containerd[1456]: time="2025-01-17T12:17:01.547595876Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:17:01.547819 containerd[1456]: time="2025-01-17T12:17:01.547799739Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:17:01.548629 containerd[1456]: time="2025-01-17T12:17:01.548572418Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:17:01.549295 
containerd[1456]: time="2025-01-17T12:17:01.548869374Z" level=info msg="Start subscribing containerd event" Jan 17 12:17:01.549295 containerd[1456]: time="2025-01-17T12:17:01.548944024Z" level=info msg="Start recovering state" Jan 17 12:17:01.549295 containerd[1456]: time="2025-01-17T12:17:01.549023133Z" level=info msg="Start event monitor" Jan 17 12:17:01.549295 containerd[1456]: time="2025-01-17T12:17:01.549064120Z" level=info msg="Start snapshots syncer" Jan 17 12:17:01.549295 containerd[1456]: time="2025-01-17T12:17:01.549078577Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:17:01.549295 containerd[1456]: time="2025-01-17T12:17:01.549090429Z" level=info msg="Start streaming server" Jan 17 12:17:01.549564 containerd[1456]: time="2025-01-17T12:17:01.549502131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:17:01.549594 containerd[1456]: time="2025-01-17T12:17:01.549582772Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:17:01.549770 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:17:01.549940 containerd[1456]: time="2025-01-17T12:17:01.549911108Z" level=info msg="containerd successfully booted in 0.045130s" Jan 17 12:17:01.721064 tar[1453]: linux-amd64/LICENSE Jan 17 12:17:01.721299 tar[1453]: linux-amd64/README.md Jan 17 12:17:01.737437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:17:02.692955 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 17 12:17:02.695929 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:17:02.699266 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:17:02.712934 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:17:02.716639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:02.719976 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:17:02.741994 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:17:02.742284 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:17:02.744469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:17:02.747213 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:17:03.432353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:03.434774 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:17:03.436452 systemd[1]: Startup finished in 1.097s (kernel) + 6.527s (initrd) + 4.955s (userspace) = 12.579s. Jan 17 12:17:03.465114 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:03.955336 kubelet[1541]: E0117 12:17:03.955247 1541 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:03.960379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:03.960682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:03.961193 systemd[1]: kubelet.service: Consumed 1.057s CPU time. 
Jan 17 12:17:07.862685 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:17:07.864062 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:32896.service - OpenSSH per-connection server daemon (10.0.0.1:32896). Jan 17 12:17:07.905326 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 32896 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:07.907376 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:07.915586 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:17:07.925928 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:17:07.928039 systemd-logind[1439]: New session 1 of user core. Jan 17 12:17:07.939110 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:17:07.952908 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:17:07.956005 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:17:08.084534 systemd[1560]: Queued start job for default target default.target. Jan 17 12:17:08.097174 systemd[1560]: Created slice app.slice - User Application Slice. Jan 17 12:17:08.097203 systemd[1560]: Reached target paths.target - Paths. Jan 17 12:17:08.097219 systemd[1560]: Reached target timers.target - Timers. Jan 17 12:17:08.099189 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:17:08.112278 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:17:08.112446 systemd[1560]: Reached target sockets.target - Sockets. Jan 17 12:17:08.112468 systemd[1560]: Reached target basic.target - Basic System. Jan 17 12:17:08.112518 systemd[1560]: Reached target default.target - Main User Target. Jan 17 12:17:08.112560 systemd[1560]: Startup finished in 149ms. Jan 17 12:17:08.113158 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:17:08.115017 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:17:08.176484 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:32900.service - OpenSSH per-connection server daemon (10.0.0.1:32900). Jan 17 12:17:08.220072 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 32900 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.222145 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.227067 systemd-logind[1439]: New session 2 of user core. Jan 17 12:17:08.235927 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:17:08.307413 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:32916.service - OpenSSH per-connection server daemon (10.0.0.1:32916). Jan 17 12:17:08.339913 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 32916 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.341822 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.346188 systemd-logind[1439]: New session 3 of user core. Jan 17 12:17:08.355823 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:17:08.422024 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:32932.service - OpenSSH per-connection server daemon (10.0.0.1:32932). 
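[Editor's note] Each accepted connection gets its own sshd@... unit; the instance name appears to encode a connection counter plus the local and remote address:port pairs (the parenthetical in each "Started sshd@..." line repeats the remote endpoint). A small, purely illustrative helper that splits one of the logged unit names back apart:

    def parse_sshd_unit(unit: str) -> dict:
        body = unit.removeprefix("sshd@").removesuffix(".service")
        counter, local, remote = body.split("-")
        lhost, lport = local.rsplit(":", 1)
        rhost, rport = remote.rsplit(":", 1)
        return {"counter": int(counter),
                "local": (lhost, int(lport)),
                "remote": (rhost, int(rport))}

    print(parse_sshd_unit("sshd@0-10.0.0.101:22-10.0.0.1:32896.service"))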
Jan 17 12:17:08.454891 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 32932 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.456874 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.461146 systemd-logind[1439]: New session 4 of user core. Jan 17 12:17:08.469955 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:17:08.496546 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:08.500783 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:32900.service: Deactivated successfully. Jan 17 12:17:08.502933 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:17:08.503615 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:17:08.504571 systemd-logind[1439]: Removed session 2. Jan 17 12:17:08.544721 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:32940.service - OpenSSH per-connection server daemon (10.0.0.1:32940). Jan 17 12:17:08.577534 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 32940 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.579608 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.584190 systemd-logind[1439]: New session 5 of user core. Jan 17 12:17:08.593819 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:17:08.616541 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:08.621733 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:32916.service: Deactivated successfully. Jan 17 12:17:08.624494 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:17:08.625273 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:17:08.626556 systemd-logind[1439]: Removed session 3. Jan 17 12:17:08.654847 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:17:08.655213 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:08.672080 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:08.688317 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:32952.service - OpenSSH per-connection server daemon (10.0.0.1:32952). Jan 17 12:17:08.720456 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 32952 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.722241 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.726915 systemd-logind[1439]: New session 6 of user core. Jan 17 12:17:08.736328 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:08.736875 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:17:08.743063 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:32932.service: Deactivated successfully. Jan 17 12:17:08.745170 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:17:08.745770 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:17:08.746583 systemd-logind[1439]: Removed session 4. 
Jan 17 12:17:08.793138 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:17:08.793480 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:08.798176 sudo[1603]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:08.805305 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:17:08.805638 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:08.828990 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:08.830568 auditctl[1606]: No rules Jan 17 12:17:08.831018 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:17:08.831271 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:08.834181 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:08.865160 augenrules[1624]: No rules Jan 17 12:17:08.867067 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:08.868405 sudo[1602]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:08.884629 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:32954.service - OpenSSH per-connection server daemon (10.0.0.1:32954). Jan 17 12:17:08.888361 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:08.891884 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:32940.service: Deactivated successfully. Jan 17 12:17:08.893616 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:17:08.894255 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:17:08.895157 systemd-logind[1439]: Removed session 5. Jan 17 12:17:08.916984 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 32954 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:17:08.918786 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:08.922612 systemd-logind[1439]: New session 7 of user core. Jan 17 12:17:08.931772 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:17:08.984158 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:17:08.984501 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:09.080518 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:09.085251 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:32952.service: Deactivated successfully. Jan 17 12:17:09.087320 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:17:09.088280 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:17:09.089267 systemd-logind[1439]: Removed session 6. Jan 17 12:17:09.478019 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:17:09.478157 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:17:10.059138 dockerd[1655]: time="2025-01-17T12:17:10.059051784Z" level=info msg="Starting up" Jan 17 12:17:11.516816 dockerd[1655]: time="2025-01-17T12:17:11.516634512Z" level=info msg="Loading containers: start." 
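[Editor's note] The sudo session above removes two files from /etc/audit/rules.d and restarts audit-rules.service, after which auditctl and augenrules both report "No rules". A hedged sketch of an equivalent flush-and-reload using the standard audit userspace tools (not necessarily what the service unit runs internally):

    import subprocess

    def reload_audit_rules() -> None:
        subprocess.run(["auditctl", "-D"], check=True)        # delete all loaded rules
        subprocess.run(["augenrules", "--load"], check=True)  # rebuild from /etc/audit/rules.d
        subprocess.run(["auditctl", "-l"], check=True)        # prints "No rules" here

    if __name__ == "__main__":
        reload_audit_rules()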
Jan 17 12:17:11.820698 kernel: Initializing XFRM netlink socket Jan 17 12:17:11.918197 systemd-networkd[1386]: docker0: Link UP Jan 17 12:17:11.936397 dockerd[1655]: time="2025-01-17T12:17:11.936327166Z" level=info msg="Loading containers: done." Jan 17 12:17:12.029044 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1961957393-merged.mount: Deactivated successfully. Jan 17 12:17:12.030476 dockerd[1655]: time="2025-01-17T12:17:12.030406919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:17:12.030638 dockerd[1655]: time="2025-01-17T12:17:12.030606733Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:17:12.030871 dockerd[1655]: time="2025-01-17T12:17:12.030834621Z" level=info msg="Daemon has completed initialization" Jan 17 12:17:12.074532 dockerd[1655]: time="2025-01-17T12:17:12.074436199Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:17:12.074777 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:17:14.211046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:17:14.225894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:14.443939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:14.448989 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:14.541967 kubelet[1811]: E0117 12:17:14.541739 1811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:14.549192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:14.549410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:18.748489 containerd[1456]: time="2025-01-17T12:17:18.748434188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:17:20.196577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947986174.mount: Deactivated successfully. 
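[Editor's note] "API listen on /run/docker.sock" means the daemon is now answering on its UNIX socket; the Engine API's GET /_ping endpoint returns "OK" on a healthy daemon. A minimal hand-rolled probe (no Docker SDK), using the socket path from the log:

    import socket

    def docker_ping(sock_path: str = "/run/docker.sock") -> str:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            # HTTP/1.0 so the daemon closes the connection after replying.
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode()

    if __name__ == "__main__":
        print(docker_ping())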
Jan 17 12:17:22.772594 containerd[1456]: time="2025-01-17T12:17:22.772510918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:22.829220 containerd[1456]: time="2025-01-17T12:17:22.829132283Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 17 12:17:22.877145 containerd[1456]: time="2025-01-17T12:17:22.877086905Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:22.915522 containerd[1456]: time="2025-01-17T12:17:22.915457554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:22.916702 containerd[1456]: time="2025-01-17T12:17:22.916634832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 4.168154667s" Jan 17 12:17:22.916702 containerd[1456]: time="2025-01-17T12:17:22.916690066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:17:22.943592 containerd[1456]: time="2025-01-17T12:17:22.943544056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:17:24.635919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:17:24.644930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:24.804670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:24.812151 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:25.012109 kubelet[1896]: E0117 12:17:25.011962 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:25.016640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:25.016869 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
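[Editor's note] Every kubelet start so far dies on the same missing /var/lib/kubelet/config.yaml; that file is written later in the provisioning flow (kubeadm, for example, generates it during init/join), so these early failures are expected. A sketch of the same existence check, with the path taken from the error message:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def load_kubelet_config(path: Path = KUBELET_CONFIG) -> str:
        # Mirrors the failure mode in the log: no config file, no kubelet.
        if not path.is_file():
            raise FileNotFoundError(
                f"failed to load Kubelet config file {path}: no such file or directory")
        return path.read_text()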
Jan 17 12:17:25.835022 containerd[1456]: time="2025-01-17T12:17:25.834945356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.835871 containerd[1456]: time="2025-01-17T12:17:25.835808585Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 17 12:17:25.837299 containerd[1456]: time="2025-01-17T12:17:25.837262852Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.840574 containerd[1456]: time="2025-01-17T12:17:25.840493040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.842130 containerd[1456]: time="2025-01-17T12:17:25.842081128Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.898485976s" Jan 17 12:17:25.842130 containerd[1456]: time="2025-01-17T12:17:25.842124630Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:17:25.875897 containerd[1456]: time="2025-01-17T12:17:25.875831712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:17:27.761467 containerd[1456]: time="2025-01-17T12:17:27.761391199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:27.762313 containerd[1456]: time="2025-01-17T12:17:27.762243588Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 17 12:17:27.763501 containerd[1456]: time="2025-01-17T12:17:27.763460210Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:27.766499 containerd[1456]: time="2025-01-17T12:17:27.766459605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:27.767708 containerd[1456]: time="2025-01-17T12:17:27.767663813Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.891771977s" Jan 17 12:17:27.767708 containerd[1456]: time="2025-01-17T12:17:27.767702646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:17:27.794990 
containerd[1456]: time="2025-01-17T12:17:27.794940386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:17:28.769617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265383431.mount: Deactivated successfully. Jan 17 12:17:29.755682 containerd[1456]: time="2025-01-17T12:17:29.755581007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:29.757798 containerd[1456]: time="2025-01-17T12:17:29.757756878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 17 12:17:29.759253 containerd[1456]: time="2025-01-17T12:17:29.759167102Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:29.761810 containerd[1456]: time="2025-01-17T12:17:29.761770595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:29.762597 containerd[1456]: time="2025-01-17T12:17:29.762544576Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.967560649s" Jan 17 12:17:29.762597 containerd[1456]: time="2025-01-17T12:17:29.762586575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:17:29.787811 containerd[1456]: time="2025-01-17T12:17:29.787775123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:17:30.319547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898861878.mount: Deactivated successfully. 
Jan 17 12:17:31.377382 containerd[1456]: time="2025-01-17T12:17:31.377321059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.380454 containerd[1456]: time="2025-01-17T12:17:31.380412085Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:17:31.385501 containerd[1456]: time="2025-01-17T12:17:31.385474429Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.391931 containerd[1456]: time="2025-01-17T12:17:31.391871215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.392939 containerd[1456]: time="2025-01-17T12:17:31.392900315Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.604924056s" Jan 17 12:17:31.393009 containerd[1456]: time="2025-01-17T12:17:31.392940040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:17:31.418838 containerd[1456]: time="2025-01-17T12:17:31.418788465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:17:31.914411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022725640.mount: Deactivated successfully. 
Jan 17 12:17:31.923839 containerd[1456]: time="2025-01-17T12:17:31.923758694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.924632 containerd[1456]: time="2025-01-17T12:17:31.924578933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:17:31.925994 containerd[1456]: time="2025-01-17T12:17:31.925962477Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.937094 containerd[1456]: time="2025-01-17T12:17:31.937012851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.938099 containerd[1456]: time="2025-01-17T12:17:31.938032813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 519.193804ms" Jan 17 12:17:31.938099 containerd[1456]: time="2025-01-17T12:17:31.938090652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:17:31.961342 containerd[1456]: time="2025-01-17T12:17:31.961284629Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:17:33.139393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495821040.mount: Deactivated successfully. Jan 17 12:17:35.135718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:17:35.153517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:35.302959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:35.308002 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:35.347699 kubelet[2053]: E0117 12:17:35.347535 2053 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:35.352272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:35.352495 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
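[Editor's note] The pull records interleaved with the kubelet restarts above carry both a size and a wall-clock duration, so a rough per-image throughput falls out directly (all numbers copied verbatim from the log lines):

    pulls = {
        "kube-apiserver:v1.30.9":          (32_673_812, 4.168154667),
        "kube-controller-manager:v1.30.9": (31_052_327, 2.898485976),
        "kube-scheduler:v1.30.9":          (19_229_664, 1.891771977),
        "kube-proxy:v1.30.9":              (29_057_356, 1.967560649),
        "coredns:v1.11.1":                 (18_182_961, 1.604924056),
        "pause:3.9":                       (321_520,    0.519193804),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")

The tiny pause image's apparent rate is dominated by per-request latency rather than bandwidth, which is why it looks far slower than the larger pulls.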
Jan 17 12:17:36.088269 containerd[1456]: time="2025-01-17T12:17:36.088174048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:36.089275 containerd[1456]: time="2025-01-17T12:17:36.089185240Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 17 12:17:36.090449 containerd[1456]: time="2025-01-17T12:17:36.090413058Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:36.093605 containerd[1456]: time="2025-01-17T12:17:36.093570914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:36.094602 containerd[1456]: time="2025-01-17T12:17:36.094555516Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.133221294s" Jan 17 12:17:36.094703 containerd[1456]: time="2025-01-17T12:17:36.094603828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:17:44.429470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:44.443878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:44.469761 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-7.scope)... Jan 17 12:17:44.469778 systemd[1]: Reloading... Jan 17 12:17:44.553684 zram_generator::config[2186]: No configuration found. Jan 17 12:17:44.760695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:44.844753 systemd[1]: Reloading finished in 374 ms. Jan 17 12:17:44.898392 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:17:44.898488 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:17:44.898783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:44.901881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:45.058798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:45.064016 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:17:45.104226 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:45.104226 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
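[Editor's note] The "Reloading requested from client PID 2148 ('systemctl') (unit session-7.scope)" lines indicate a daemon-reload issued from the interactive SSH session, after which kubelet is stopped and restarted and this time loads its configuration. A minimal sketch of that sequence, assuming it was driven by a provisioning step such as kubeadm or an install script:

    import subprocess

    # Pick up new/changed unit files and drop-ins, then restart kubelet.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "restart", "kubelet"]):
        subprocess.run(cmd, check=True)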
Jan 17 12:17:45.104226 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:45.104738 kubelet[2235]: I0117 12:17:45.104257 2235 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:17:45.297438 kubelet[2235]: I0117 12:17:45.297380 2235 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:17:45.297438 kubelet[2235]: I0117 12:17:45.297418 2235 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:17:45.297835 kubelet[2235]: I0117 12:17:45.297715 2235 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:17:45.317106 kubelet[2235]: I0117 12:17:45.316568 2235 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:45.317826 kubelet[2235]: E0117 12:17:45.317801 2235 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.331990 kubelet[2235]: I0117 12:17:45.331941 2235 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:17:45.333974 kubelet[2235]: I0117 12:17:45.333924 2235 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:17:45.334146 kubelet[2235]: I0117 12:17:45.333969 2235 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:17:45.334810 kubelet[2235]: I0117 12:17:45.334785 2235 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 17 12:17:45.334810 kubelet[2235]: I0117 12:17:45.334801 2235 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:17:45.334966 kubelet[2235]: I0117 12:17:45.334946 2235 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:45.335798 kubelet[2235]: I0117 12:17:45.335777 2235 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:17:45.335843 kubelet[2235]: I0117 12:17:45.335800 2235 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:17:45.335864 kubelet[2235]: I0117 12:17:45.335846 2235 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:17:45.335886 kubelet[2235]: I0117 12:17:45.335867 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:17:45.340087 kubelet[2235]: W0117 12:17:45.339996 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.340087 kubelet[2235]: E0117 12:17:45.340060 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.340087 kubelet[2235]: W0117 12:17:45.339999 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.340087 kubelet[2235]: E0117 12:17:45.340093 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.341300 kubelet[2235]: I0117 12:17:45.341258 2235 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:17:45.342773 kubelet[2235]: I0117 12:17:45.342753 2235 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:17:45.342850 kubelet[2235]: W0117 12:17:45.342813 2235 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:17:45.343730 kubelet[2235]: I0117 12:17:45.343579 2235 server.go:1264] "Started kubelet" Jan 17 12:17:45.344753 kubelet[2235]: I0117 12:17:45.344702 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:17:45.345701 kubelet[2235]: I0117 12:17:45.345477 2235 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:17:45.345701 kubelet[2235]: I0117 12:17:45.345098 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:17:45.347059 kubelet[2235]: I0117 12:17:45.347002 2235 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:17:45.347303 kubelet[2235]: I0117 12:17:45.347260 2235 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:17:45.347720 kubelet[2235]: I0117 12:17:45.347394 2235 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:17:45.347720 kubelet[2235]: I0117 12:17:45.347451 2235 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:17:45.347915 kubelet[2235]: W0117 12:17:45.347873 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.348168 kubelet[2235]: E0117 12:17:45.347925 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.348307 kubelet[2235]: I0117 12:17:45.348249 2235 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:17:45.348636 kubelet[2235]: E0117 12:17:45.348454 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms" Jan 17 12:17:45.349802 kubelet[2235]: I0117 12:17:45.349025 2235 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:17:45.349802 kubelet[2235]: I0117 12:17:45.349117 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:17:45.349802 kubelet[2235]: E0117 12:17:45.349637 2235 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:17:45.350180 kubelet[2235]: I0117 12:17:45.350072 2235 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:17:45.351915 kubelet[2235]: E0117 12:17:45.351743 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7a0a295b0df8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:17:45.343553016 +0000 UTC m=+0.275297721,LastTimestamp:2025-01-17 12:17:45.343553016 +0000 UTC m=+0.275297721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:17:45.365435 kubelet[2235]: I0117 12:17:45.365394 2235 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:17:45.365435 kubelet[2235]: I0117 12:17:45.365412 2235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:17:45.365435 kubelet[2235]: I0117 12:17:45.365430 2235 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:45.368070 kubelet[2235]: I0117 12:17:45.368011 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:17:45.369547 kubelet[2235]: I0117 12:17:45.369497 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:17:45.369604 kubelet[2235]: I0117 12:17:45.369562 2235 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:17:45.369604 kubelet[2235]: I0117 12:17:45.369587 2235 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:17:45.369693 kubelet[2235]: E0117 12:17:45.369667 2235 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:17:45.370267 kubelet[2235]: W0117 12:17:45.370220 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.370267 kubelet[2235]: E0117 12:17:45.370290 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:45.449098 kubelet[2235]: I0117 12:17:45.448969 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:45.449609 kubelet[2235]: E0117 12:17:45.449546 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 17 12:17:45.470747 kubelet[2235]: E0117 12:17:45.470639 2235 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:17:45.549723 kubelet[2235]: E0117 12:17:45.549638 2235 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" Jan 17 12:17:45.650979 kubelet[2235]: I0117 12:17:45.650935 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:45.651758 kubelet[2235]: E0117 12:17:45.651699 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 17 12:17:45.671809 kubelet[2235]: E0117 12:17:45.671739 2235 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:17:45.796839 kubelet[2235]: I0117 12:17:45.796801 2235 policy_none.go:49] "None policy: Start" Jan 17 12:17:45.797582 kubelet[2235]: I0117 12:17:45.797561 2235 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:17:45.797679 kubelet[2235]: I0117 12:17:45.797590 2235 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:17:45.866137 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:17:45.886001 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:17:45.889797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:17:45.904150 kubelet[2235]: I0117 12:17:45.904028 2235 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:17:45.904576 kubelet[2235]: I0117 12:17:45.904359 2235 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:17:45.904576 kubelet[2235]: I0117 12:17:45.904509 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:17:45.905969 kubelet[2235]: E0117 12:17:45.905943 2235 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:17:45.951246 kubelet[2235]: E0117 12:17:45.951181 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" Jan 17 12:17:46.053971 kubelet[2235]: I0117 12:17:46.053941 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:46.054313 kubelet[2235]: E0117 12:17:46.054232 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 17 12:17:46.072489 kubelet[2235]: I0117 12:17:46.072428 2235 topology_manager.go:215] "Topology Admit Handler" podUID="c5931e64c619c14b3c0c3e09750d8b08" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:17:46.073526 kubelet[2235]: I0117 12:17:46.073488 2235 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:17:46.074555 kubelet[2235]: I0117 12:17:46.074529 2235 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" 
podName="kube-scheduler-localhost" Jan 17 12:17:46.080480 systemd[1]: Created slice kubepods-burstable-podc5931e64c619c14b3c0c3e09750d8b08.slice - libcontainer container kubepods-burstable-podc5931e64c619c14b3c0c3e09750d8b08.slice. Jan 17 12:17:46.109878 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 17 12:17:46.133847 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 17 12:17:46.151015 kubelet[2235]: I0117 12:17:46.150961 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:46.151015 kubelet[2235]: I0117 12:17:46.151018 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:46.151489 kubelet[2235]: I0117 12:17:46.151061 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:46.151489 kubelet[2235]: I0117 12:17:46.151090 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:46.151489 kubelet[2235]: I0117 12:17:46.151113 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:46.151489 kubelet[2235]: I0117 12:17:46.151137 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:46.151489 kubelet[2235]: I0117 12:17:46.151161 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:46.151615 kubelet[2235]: I0117 12:17:46.151179 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:46.151615 kubelet[2235]: I0117 12:17:46.151212 2235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:17:46.252145 kubelet[2235]: W0117 12:17:46.251990 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.252145 kubelet[2235]: E0117 12:17:46.252053 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.407869 kubelet[2235]: E0117 12:17:46.407801 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:46.408460 containerd[1456]: time="2025-01-17T12:17:46.408417265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c5931e64c619c14b3c0c3e09750d8b08,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:46.431731 kubelet[2235]: E0117 12:17:46.431686 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:46.432018 containerd[1456]: time="2025-01-17T12:17:46.431990136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:46.436291 kubelet[2235]: E0117 12:17:46.436261 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:46.436620 containerd[1456]: time="2025-01-17T12:17:46.436517604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:46.488633 kubelet[2235]: W0117 12:17:46.488540 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.488633 kubelet[2235]: E0117 12:17:46.488606 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.731299 kubelet[2235]: W0117 12:17:46.731216 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.731299 kubelet[2235]: E0117 12:17:46.731298 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.752239 kubelet[2235]: E0117 12:17:46.752169 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" Jan 17 12:17:46.855597 kubelet[2235]: I0117 12:17:46.855549 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:46.855941 kubelet[2235]: E0117 12:17:46.855903 2235 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 17 12:17:46.857765 update_engine[1446]: I20250117 12:17:46.857700 1446 update_attempter.cc:509] Updating boot flags... Jan 17 12:17:46.882700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2276) Jan 17 12:17:46.915851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2278) Jan 17 12:17:46.931710 kubelet[2235]: W0117 12:17:46.931403 2235 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:46.931710 kubelet[2235]: E0117 12:17:46.931514 2235 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:47.359701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250902966.mount: Deactivated successfully. 
Jan 17 12:17:47.367932 containerd[1456]: time="2025-01-17T12:17:47.367858323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:47.369763 containerd[1456]: time="2025-01-17T12:17:47.369675231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:17:47.370822 containerd[1456]: time="2025-01-17T12:17:47.370799344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:47.371988 containerd[1456]: time="2025-01-17T12:17:47.371956821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:47.373037 containerd[1456]: time="2025-01-17T12:17:47.372986455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:17:47.374040 containerd[1456]: time="2025-01-17T12:17:47.374007082Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:47.375049 containerd[1456]: time="2025-01-17T12:17:47.375001589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:17:47.378318 containerd[1456]: time="2025-01-17T12:17:47.378279089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:47.380491 containerd[1456]: time="2025-01-17T12:17:47.380456752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 971.953855ms" Jan 17 12:17:47.381345 containerd[1456]: time="2025-01-17T12:17:47.381289081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 949.235325ms" Jan 17 12:17:47.381922 containerd[1456]: time="2025-01-17T12:17:47.381892506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 945.325608ms" Jan 17 12:17:47.517909 kubelet[2235]: E0117 12:17:47.517794 2235 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": 
dial tcp 10.0.0.101:6443: connect: connection refused Jan 17 12:17:47.540320 containerd[1456]: time="2025-01-17T12:17:47.539754906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:47.540320 containerd[1456]: time="2025-01-17T12:17:47.539833415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:47.540320 containerd[1456]: time="2025-01-17T12:17:47.539851980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.540320 containerd[1456]: time="2025-01-17T12:17:47.539946259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.540452240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.542763737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.542484827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.542552425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.542563947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.542861 containerd[1456]: time="2025-01-17T12:17:47.542670339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.543066 containerd[1456]: time="2025-01-17T12:17:47.542782051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.543160 containerd[1456]: time="2025-01-17T12:17:47.543084255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:47.576947 systemd[1]: Started cri-containerd-3c9a7219f0f5445f1460679b0786adc0cf008368ffb5a1e99e9cde5e81e57e6b.scope - libcontainer container 3c9a7219f0f5445f1460679b0786adc0cf008368ffb5a1e99e9cde5e81e57e6b. Jan 17 12:17:47.578856 systemd[1]: Started cri-containerd-5e5ce6036674edeeb93bf82a2ea2346e63f85bcdd248a76b97232a3e685a7f41.scope - libcontainer container 5e5ce6036674edeeb93bf82a2ea2346e63f85bcdd248a76b97232a3e685a7f41. Jan 17 12:17:47.581548 systemd[1]: Started cri-containerd-d2f5ae0f61186349459a91506420f2e86891949f6688df8ebd7f17fd7b2353b2.scope - libcontainer container d2f5ae0f61186349459a91506420f2e86891949f6688df8ebd7f17fd7b2353b2. 
Jan 17 12:17:47.622038 containerd[1456]: time="2025-01-17T12:17:47.618809611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c5931e64c619c14b3c0c3e09750d8b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e5ce6036674edeeb93bf82a2ea2346e63f85bcdd248a76b97232a3e685a7f41\"" Jan 17 12:17:47.622225 kubelet[2235]: E0117 12:17:47.620355 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:47.630327 containerd[1456]: time="2025-01-17T12:17:47.630239511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c9a7219f0f5445f1460679b0786adc0cf008368ffb5a1e99e9cde5e81e57e6b\"" Jan 17 12:17:47.632325 containerd[1456]: time="2025-01-17T12:17:47.632292117Z" level=info msg="CreateContainer within sandbox \"5e5ce6036674edeeb93bf82a2ea2346e63f85bcdd248a76b97232a3e685a7f41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:17:47.632970 kubelet[2235]: E0117 12:17:47.632927 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:47.633612 containerd[1456]: time="2025-01-17T12:17:47.633533332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2f5ae0f61186349459a91506420f2e86891949f6688df8ebd7f17fd7b2353b2\"" Jan 17 12:17:47.634631 kubelet[2235]: E0117 12:17:47.634609 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:47.636078 containerd[1456]: time="2025-01-17T12:17:47.636027946Z" level=info msg="CreateContainer within sandbox \"3c9a7219f0f5445f1460679b0786adc0cf008368ffb5a1e99e9cde5e81e57e6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:17:47.636929 containerd[1456]: time="2025-01-17T12:17:47.636894651Z" level=info msg="CreateContainer within sandbox \"d2f5ae0f61186349459a91506420f2e86891949f6688df8ebd7f17fd7b2353b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:17:47.656125 containerd[1456]: time="2025-01-17T12:17:47.655981002Z" level=info msg="CreateContainer within sandbox \"5e5ce6036674edeeb93bf82a2ea2346e63f85bcdd248a76b97232a3e685a7f41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c795ef531f4785f9538cc3b1334c8a19e4dbd6c1a2ff0de52447e59577a8cac\"" Jan 17 12:17:47.656849 containerd[1456]: time="2025-01-17T12:17:47.656760241Z" level=info msg="StartContainer for \"0c795ef531f4785f9538cc3b1334c8a19e4dbd6c1a2ff0de52447e59577a8cac\"" Jan 17 12:17:47.662593 containerd[1456]: time="2025-01-17T12:17:47.662551511Z" level=info msg="CreateContainer within sandbox \"3c9a7219f0f5445f1460679b0786adc0cf008368ffb5a1e99e9cde5e81e57e6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"07411fe7f49e9a2f8601ae4ff63a15e2b946ac3cbb219f32ec0a395354b5a52a\"" Jan 17 12:17:47.663120 containerd[1456]: time="2025-01-17T12:17:47.663096756Z" level=info msg="StartContainer for \"07411fe7f49e9a2f8601ae4ff63a15e2b946ac3cbb219f32ec0a395354b5a52a\"" Jan 17 
12:17:47.667347 containerd[1456]: time="2025-01-17T12:17:47.667308929Z" level=info msg="CreateContainer within sandbox \"d2f5ae0f61186349459a91506420f2e86891949f6688df8ebd7f17fd7b2353b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d45577162d163b2035da94514f8301413abdae54be9eb5c43d3c80188390cbbb\"" Jan 17 12:17:47.668029 containerd[1456]: time="2025-01-17T12:17:47.667984752Z" level=info msg="StartContainer for \"d45577162d163b2035da94514f8301413abdae54be9eb5c43d3c80188390cbbb\"" Jan 17 12:17:47.687833 systemd[1]: Started cri-containerd-0c795ef531f4785f9538cc3b1334c8a19e4dbd6c1a2ff0de52447e59577a8cac.scope - libcontainer container 0c795ef531f4785f9538cc3b1334c8a19e4dbd6c1a2ff0de52447e59577a8cac. Jan 17 12:17:47.691972 systemd[1]: Started cri-containerd-07411fe7f49e9a2f8601ae4ff63a15e2b946ac3cbb219f32ec0a395354b5a52a.scope - libcontainer container 07411fe7f49e9a2f8601ae4ff63a15e2b946ac3cbb219f32ec0a395354b5a52a. Jan 17 12:17:47.698922 systemd[1]: Started cri-containerd-d45577162d163b2035da94514f8301413abdae54be9eb5c43d3c80188390cbbb.scope - libcontainer container d45577162d163b2035da94514f8301413abdae54be9eb5c43d3c80188390cbbb. Jan 17 12:17:47.736019 containerd[1456]: time="2025-01-17T12:17:47.735852026Z" level=info msg="StartContainer for \"0c795ef531f4785f9538cc3b1334c8a19e4dbd6c1a2ff0de52447e59577a8cac\" returns successfully" Jan 17 12:17:47.743905 containerd[1456]: time="2025-01-17T12:17:47.743861225Z" level=info msg="StartContainer for \"07411fe7f49e9a2f8601ae4ff63a15e2b946ac3cbb219f32ec0a395354b5a52a\" returns successfully" Jan 17 12:17:47.750920 containerd[1456]: time="2025-01-17T12:17:47.750791266Z" level=info msg="StartContainer for \"d45577162d163b2035da94514f8301413abdae54be9eb5c43d3c80188390cbbb\" returns successfully" Jan 17 12:17:48.385864 kubelet[2235]: E0117 12:17:48.385812 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:48.387846 kubelet[2235]: E0117 12:17:48.387815 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:48.389101 kubelet[2235]: E0117 12:17:48.389068 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:48.458383 kubelet[2235]: I0117 12:17:48.458053 2235 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:48.682247 kubelet[2235]: E0117 12:17:48.682117 2235 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:17:48.786744 kubelet[2235]: I0117 12:17:48.786699 2235 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:17:48.793513 kubelet[2235]: E0117 12:17:48.793468 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:48.894645 kubelet[2235]: E0117 12:17:48.894574 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:48.995432 kubelet[2235]: E0117 12:17:48.995337 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.096306 kubelet[2235]: E0117 
12:17:49.096245 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.196744 kubelet[2235]: E0117 12:17:49.196685 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.297270 kubelet[2235]: E0117 12:17:49.297163 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.390811 kubelet[2235]: E0117 12:17:49.390778 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:49.397551 kubelet[2235]: E0117 12:17:49.397527 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.480910 kubelet[2235]: E0117 12:17:49.480866 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:49.498773 kubelet[2235]: E0117 12:17:49.498703 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.599500 kubelet[2235]: E0117 12:17:49.599333 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.700246 kubelet[2235]: E0117 12:17:49.700180 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:49.800850 kubelet[2235]: E0117 12:17:49.800797 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:17:50.339331 kubelet[2235]: I0117 12:17:50.339298 2235 apiserver.go:52] "Watching apiserver" Jan 17 12:17:50.348219 kubelet[2235]: I0117 12:17:50.348196 2235 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:17:50.495314 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-7.scope)... Jan 17 12:17:50.495329 systemd[1]: Reloading... Jan 17 12:17:50.567695 zram_generator::config[2572]: No configuration found. Jan 17 12:17:50.683116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:50.785366 systemd[1]: Reloading finished in 289 ms. Jan 17 12:17:50.834977 kubelet[2235]: I0117 12:17:50.834912 2235 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:50.835231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:50.850281 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:17:50.850563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:50.858876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:51.026605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:17:51.033045 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:17:51.086341 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:51.086341 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:17:51.086341 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:51.086341 kubelet[2614]: I0117 12:17:51.085820 2614 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:17:51.092199 kubelet[2614]: I0117 12:17:51.092090 2614 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:17:51.092199 kubelet[2614]: I0117 12:17:51.092130 2614 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:17:51.092456 kubelet[2614]: I0117 12:17:51.092414 2614 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:17:51.094290 kubelet[2614]: I0117 12:17:51.094255 2614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:17:51.095744 kubelet[2614]: I0117 12:17:51.095698 2614 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:51.104411 kubelet[2614]: I0117 12:17:51.104377 2614 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:17:51.104621 kubelet[2614]: I0117 12:17:51.104588 2614 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:17:51.105046 kubelet[2614]: I0117 12:17:51.104856 2614 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:17:51.105164 kubelet[2614]: I0117 12:17:51.105052 2614 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:17:51.105164 kubelet[2614]: I0117 12:17:51.105062 2614 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:17:51.105164 kubelet[2614]: I0117 12:17:51.105107 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:51.105251 kubelet[2614]: I0117 12:17:51.105200 2614 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:17:51.105251 kubelet[2614]: I0117 12:17:51.105213 2614 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:17:51.105251 kubelet[2614]: I0117 12:17:51.105236 2614 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:17:51.105327 kubelet[2614]: I0117 12:17:51.105264 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:17:51.106006 kubelet[2614]: I0117 12:17:51.105964 2614 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:17:51.106881 kubelet[2614]: I0117 12:17:51.106861 2614 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:17:51.107991 kubelet[2614]: I0117 12:17:51.107967 2614 server.go:1264] "Started kubelet" Jan 17 12:17:51.110278 kubelet[2614]: I0117 12:17:51.110259 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:17:51.117900 kubelet[2614]: I0117 12:17:51.117614 2614 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:17:51.119086 kubelet[2614]: I0117 12:17:51.119055 2614 server.go:455] "Adding debug handlers to 
kubelet server" Jan 17 12:17:51.120479 kubelet[2614]: I0117 12:17:51.120407 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:17:51.121717 kubelet[2614]: I0117 12:17:51.120755 2614 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:17:51.121717 kubelet[2614]: I0117 12:17:51.121187 2614 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:17:51.123002 kubelet[2614]: I0117 12:17:51.122600 2614 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:17:51.123002 kubelet[2614]: I0117 12:17:51.122837 2614 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:17:51.124321 kubelet[2614]: I0117 12:17:51.124288 2614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:17:51.125231 kubelet[2614]: E0117 12:17:51.125174 2614 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:17:51.125665 kubelet[2614]: I0117 12:17:51.125631 2614 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:17:51.125715 kubelet[2614]: I0117 12:17:51.125650 2614 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:17:51.127911 kubelet[2614]: I0117 12:17:51.127786 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:17:51.129718 kubelet[2614]: I0117 12:17:51.129497 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:17:51.129718 kubelet[2614]: I0117 12:17:51.129529 2614 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:17:51.129718 kubelet[2614]: I0117 12:17:51.129549 2614 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:17:51.129718 kubelet[2614]: E0117 12:17:51.129595 2614 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:17:51.165684 kubelet[2614]: I0117 12:17:51.165629 2614 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:17:51.165684 kubelet[2614]: I0117 12:17:51.165674 2614 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:17:51.165684 kubelet[2614]: I0117 12:17:51.165698 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:51.165906 kubelet[2614]: I0117 12:17:51.165876 2614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:17:51.165949 kubelet[2614]: I0117 12:17:51.165892 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:17:51.165949 kubelet[2614]: I0117 12:17:51.165918 2614 policy_none.go:49] "None policy: Start" Jan 17 12:17:51.166736 kubelet[2614]: I0117 12:17:51.166713 2614 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:17:51.166788 kubelet[2614]: I0117 12:17:51.166742 2614 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:17:51.166959 kubelet[2614]: I0117 12:17:51.166931 2614 state_mem.go:75] "Updated machine memory state" Jan 17 12:17:51.171560 kubelet[2614]: I0117 12:17:51.171486 2614 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:17:51.171802 
kubelet[2614]: I0117 12:17:51.171747 2614 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:17:51.171956 kubelet[2614]: I0117 12:17:51.171926 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:17:51.227488 kubelet[2614]: I0117 12:17:51.227434 2614 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:17:51.229866 kubelet[2614]: I0117 12:17:51.229779 2614 topology_manager.go:215] "Topology Admit Handler" podUID="c5931e64c619c14b3c0c3e09750d8b08" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:17:51.230097 kubelet[2614]: I0117 12:17:51.229890 2614 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:17:51.230097 kubelet[2614]: I0117 12:17:51.229995 2614 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:17:51.241038 kubelet[2614]: I0117 12:17:51.240984 2614 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:17:51.241165 kubelet[2614]: I0117 12:17:51.241122 2614 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:17:51.324314 kubelet[2614]: I0117 12:17:51.324151 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:51.324314 kubelet[2614]: I0117 12:17:51.324202 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:51.324314 kubelet[2614]: I0117 12:17:51.324255 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:51.324314 kubelet[2614]: I0117 12:17:51.324278 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:51.324314 kubelet[2614]: I0117 12:17:51.324314 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:17:51.324594 kubelet[2614]: I0117 12:17:51.324340 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:51.324594 kubelet[2614]: I0117 12:17:51.324363 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5931e64c619c14b3c0c3e09750d8b08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c5931e64c619c14b3c0c3e09750d8b08\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:51.324594 kubelet[2614]: I0117 12:17:51.324383 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:51.324594 kubelet[2614]: I0117 12:17:51.324403 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:17:51.545832 kubelet[2614]: E0117 12:17:51.545789 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:51.547350 kubelet[2614]: E0117 12:17:51.547330 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:51.548219 kubelet[2614]: E0117 12:17:51.547913 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:52.107685 kubelet[2614]: I0117 12:17:52.105539 2614 apiserver.go:52] "Watching apiserver" Jan 17 12:17:52.123876 kubelet[2614]: I0117 12:17:52.123813 2614 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:17:52.145610 kubelet[2614]: E0117 12:17:52.145562 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:52.161364 kubelet[2614]: E0117 12:17:52.160742 2614 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:17:52.161364 kubelet[2614]: E0117 12:17:52.161278 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:52.161829 kubelet[2614]: E0117 12:17:52.161810 2614 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 12:17:52.162160 kubelet[2614]: E0117 12:17:52.162143 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:52.242935 
kubelet[2614]: I0117 12:17:52.242840 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.242822178 podStartE2EDuration="1.242822178s" podCreationTimestamp="2025-01-17 12:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:52.233435863 +0000 UTC m=+1.195990179" watchObservedRunningTime="2025-01-17 12:17:52.242822178 +0000 UTC m=+1.205376493" Jan 17 12:17:52.251901 kubelet[2614]: I0117 12:17:52.251610 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.25156718 podStartE2EDuration="1.25156718s" podCreationTimestamp="2025-01-17 12:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:52.243104222 +0000 UTC m=+1.205658537" watchObservedRunningTime="2025-01-17 12:17:52.25156718 +0000 UTC m=+1.214121495" Jan 17 12:17:53.146733 kubelet[2614]: E0117 12:17:53.146523 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:53.146733 kubelet[2614]: E0117 12:17:53.146674 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:54.147750 kubelet[2614]: E0117 12:17:54.147696 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:55.121882 kubelet[2614]: E0117 12:17:55.121811 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:17:55.786612 sudo[1635]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:55.789029 sshd[1630]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:55.794122 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:32954.service: Deactivated successfully. Jan 17 12:17:55.796511 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:17:55.796802 systemd[1]: session-7.scope: Consumed 5.565s CPU time, 195.1M memory peak, 0B memory swap peak. Jan 17 12:17:55.797366 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:17:55.798513 systemd-logind[1439]: Removed session 7. 
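The dns.go:153 "Nameserver limits exceeded" entries that recur throughout this capture mean the node's resolv.conf lists more than three nameservers; the kubelet keeps only the first three (the glibc resolver limit) and logs the line it actually applied, here "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that trimming; the helper name and the four-server resolv.conf text are made up for illustration.

MAX_NAMESERVERS = 3   # resolv.conf/glibc limit that the kubelet enforces

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS]

# Hypothetical resolv.conf with four nameservers; the first three match the applied line in the log.
example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(example))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']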
Jan 17 12:18:00.269365 kubelet[2614]: E0117 12:18:00.269296 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:00.285926 kubelet[2614]: I0117 12:18:00.285859 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.285821103 podStartE2EDuration="9.285821103s" podCreationTimestamp="2025-01-17 12:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:52.251864472 +0000 UTC m=+1.214418797" watchObservedRunningTime="2025-01-17 12:18:00.285821103 +0000 UTC m=+9.248375418" Jan 17 12:18:01.161363 kubelet[2614]: E0117 12:18:01.161321 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:01.741108 kubelet[2614]: E0117 12:18:01.741054 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:02.162723 kubelet[2614]: E0117 12:18:02.162687 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:04.798381 kubelet[2614]: I0117 12:18:04.798302 2614 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:18:04.798954 kubelet[2614]: I0117 12:18:04.798835 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:18:04.798985 containerd[1456]: time="2025-01-17T12:18:04.798644908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:18:05.124161 kubelet[2614]: E0117 12:18:05.123982 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:05.794409 kubelet[2614]: I0117 12:18:05.794322 2614 topology_manager.go:215] "Topology Admit Handler" podUID="7740c5b1-153a-4bc6-8ae5-b9adef040b83" podNamespace="kube-system" podName="kube-proxy-d9tr9" Jan 17 12:18:05.802899 systemd[1]: Created slice kubepods-besteffort-pod7740c5b1_153a_4bc6_8ae5_b9adef040b83.slice - libcontainer container kubepods-besteffort-pod7740c5b1_153a_4bc6_8ae5_b9adef040b83.slice. 
Jan 17 12:18:05.814597 kubelet[2614]: I0117 12:18:05.814546 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7740c5b1-153a-4bc6-8ae5-b9adef040b83-kube-proxy\") pod \"kube-proxy-d9tr9\" (UID: \"7740c5b1-153a-4bc6-8ae5-b9adef040b83\") " pod="kube-system/kube-proxy-d9tr9" Jan 17 12:18:05.814597 kubelet[2614]: I0117 12:18:05.814593 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7740c5b1-153a-4bc6-8ae5-b9adef040b83-xtables-lock\") pod \"kube-proxy-d9tr9\" (UID: \"7740c5b1-153a-4bc6-8ae5-b9adef040b83\") " pod="kube-system/kube-proxy-d9tr9" Jan 17 12:18:05.815347 kubelet[2614]: I0117 12:18:05.814622 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7740c5b1-153a-4bc6-8ae5-b9adef040b83-lib-modules\") pod \"kube-proxy-d9tr9\" (UID: \"7740c5b1-153a-4bc6-8ae5-b9adef040b83\") " pod="kube-system/kube-proxy-d9tr9" Jan 17 12:18:05.815347 kubelet[2614]: I0117 12:18:05.814646 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnw94\" (UniqueName: \"kubernetes.io/projected/7740c5b1-153a-4bc6-8ae5-b9adef040b83-kube-api-access-vnw94\") pod \"kube-proxy-d9tr9\" (UID: \"7740c5b1-153a-4bc6-8ae5-b9adef040b83\") " pod="kube-system/kube-proxy-d9tr9" Jan 17 12:18:05.834927 kubelet[2614]: I0117 12:18:05.834806 2614 topology_manager.go:215] "Topology Admit Handler" podUID="39f04c1d-2f3e-4ad4-9040-e7ec56e484ff" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-jsqk2" Jan 17 12:18:05.846086 systemd[1]: Created slice kubepods-besteffort-pod39f04c1d_2f3e_4ad4_9040_e7ec56e484ff.slice - libcontainer container kubepods-besteffort-pod39f04c1d_2f3e_4ad4_9040_e7ec56e484ff.slice. 
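The "Created slice" entries above show the naming scheme the kubelet's systemd cgroup driver (CgroupDriver "systemd" in the container manager config dump earlier) uses for per-pod cgroups: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID escaped to underscores because "-" is the parent/child separator in systemd slice names. A tiny sketch of that convention, not the kubelet's actual code; the function name is mine.

def pod_slice(pod_uid: str, qos_class: str = "besteffort") -> str:
    # "-" separates parent slices in systemd unit names, so the UID's dashes become "_".
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("7740c5b1-153a-4bc6-8ae5-b9adef040b83"))
# kubepods-besteffort-pod7740c5b1_153a_4bc6_8ae5_b9adef040b83.slice  (kube-proxy-d9tr9 above)
print(pod_slice("39f04c1d-2f3e-4ad4-9040-e7ec56e484ff"))
# kubepods-besteffort-pod39f04c1d_2f3e_4ad4_9040_e7ec56e484ff.slice  (tigera-operator pod above)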
Jan 17 12:18:05.915414 kubelet[2614]: I0117 12:18:05.915326 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39f04c1d-2f3e-4ad4-9040-e7ec56e484ff-var-lib-calico\") pod \"tigera-operator-7bc55997bb-jsqk2\" (UID: \"39f04c1d-2f3e-4ad4-9040-e7ec56e484ff\") " pod="tigera-operator/tigera-operator-7bc55997bb-jsqk2" Jan 17 12:18:05.915414 kubelet[2614]: I0117 12:18:05.915417 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ggw\" (UniqueName: \"kubernetes.io/projected/39f04c1d-2f3e-4ad4-9040-e7ec56e484ff-kube-api-access-c8ggw\") pod \"tigera-operator-7bc55997bb-jsqk2\" (UID: \"39f04c1d-2f3e-4ad4-9040-e7ec56e484ff\") " pod="tigera-operator/tigera-operator-7bc55997bb-jsqk2" Jan 17 12:18:06.115097 kubelet[2614]: E0117 12:18:06.114924 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:06.116550 containerd[1456]: time="2025-01-17T12:18:06.116480391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9tr9,Uid:7740c5b1-153a-4bc6-8ae5-b9adef040b83,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:06.150391 containerd[1456]: time="2025-01-17T12:18:06.150030571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jsqk2,Uid:39f04c1d-2f3e-4ad4-9040-e7ec56e484ff,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:18:06.153529 containerd[1456]: time="2025-01-17T12:18:06.153376808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:06.153529 containerd[1456]: time="2025-01-17T12:18:06.153480253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:06.153529 containerd[1456]: time="2025-01-17T12:18:06.153500811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:06.153894 containerd[1456]: time="2025-01-17T12:18:06.153627510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:06.182905 systemd[1]: Started cri-containerd-7d08ac34c00c743efa4aa104d40d3220c42781e115cd0b993c14a190b85836b2.scope - libcontainer container 7d08ac34c00c743efa4aa104d40d3220c42781e115cd0b993c14a190b85836b2. Jan 17 12:18:06.190987 containerd[1456]: time="2025-01-17T12:18:06.190612443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:06.190987 containerd[1456]: time="2025-01-17T12:18:06.190737779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:06.190987 containerd[1456]: time="2025-01-17T12:18:06.190751775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:06.190987 containerd[1456]: time="2025-01-17T12:18:06.190841403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:06.211881 systemd[1]: Started cri-containerd-f9f0246837c07562d56875c5528a652321c4ca4f6a00e07d7f7a033542edc4ad.scope - libcontainer container f9f0246837c07562d56875c5528a652321c4ca4f6a00e07d7f7a033542edc4ad. Jan 17 12:18:06.212679 containerd[1456]: time="2025-01-17T12:18:06.212311137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9tr9,Uid:7740c5b1-153a-4bc6-8ae5-b9adef040b83,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d08ac34c00c743efa4aa104d40d3220c42781e115cd0b993c14a190b85836b2\"" Jan 17 12:18:06.213210 kubelet[2614]: E0117 12:18:06.213189 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:06.215704 containerd[1456]: time="2025-01-17T12:18:06.215356920Z" level=info msg="CreateContainer within sandbox \"7d08ac34c00c743efa4aa104d40d3220c42781e115cd0b993c14a190b85836b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:18:06.238031 containerd[1456]: time="2025-01-17T12:18:06.237961499Z" level=info msg="CreateContainer within sandbox \"7d08ac34c00c743efa4aa104d40d3220c42781e115cd0b993c14a190b85836b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9e3023adc80e4ca94c3105aac606d292820a5dc613089e49ee84951a470565ef\"" Jan 17 12:18:06.238584 containerd[1456]: time="2025-01-17T12:18:06.238557571Z" level=info msg="StartContainer for \"9e3023adc80e4ca94c3105aac606d292820a5dc613089e49ee84951a470565ef\"" Jan 17 12:18:06.252422 containerd[1456]: time="2025-01-17T12:18:06.252381827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jsqk2,Uid:39f04c1d-2f3e-4ad4-9040-e7ec56e484ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9f0246837c07562d56875c5528a652321c4ca4f6a00e07d7f7a033542edc4ad\"" Jan 17 12:18:06.254542 containerd[1456]: time="2025-01-17T12:18:06.254416566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:18:06.271824 systemd[1]: Started cri-containerd-9e3023adc80e4ca94c3105aac606d292820a5dc613089e49ee84951a470565ef.scope - libcontainer container 9e3023adc80e4ca94c3105aac606d292820a5dc613089e49ee84951a470565ef. Jan 17 12:18:06.302088 containerd[1456]: time="2025-01-17T12:18:06.302038748Z" level=info msg="StartContainer for \"9e3023adc80e4ca94c3105aac606d292820a5dc613089e49ee84951a470565ef\" returns successfully" Jan 17 12:18:07.173271 kubelet[2614]: E0117 12:18:07.173199 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:10.309693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473012520.mount: Deactivated successfully. 
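Editor's note: the sequence above (RunPodSandbox returning a sandbox id, CreateContainer inside that sandbox, then StartContainer) is the ordinary CRI lifecycle the kubelet drives against containerd. A rough client-side sketch of the same three calls over the CRI RuntimeService gRPC API; the socket path, the placeholder image reference, and the omitted config fields (log directory, DNS, linux options, mounts) are assumptions, and field names follow k8s.io/cri-api v1 as best understood here, so treat it as a sketch rather than a reference implementation.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        // Assumption: containerd's CRI endpoint at the default socket path.
        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Sandbox metadata taken from the RunPodSandbox entry in the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-d9tr9",
                Uid:       "7740c5b1-153a-4bc6-8ae5-b9adef040b83",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Hypothetical placeholder; the real ref is whatever the kubelet resolved.
                Image: &runtimeapi.ImageSpec{Image: "<kube-proxy image ref>"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
        log.Printf("started %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
    }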
Jan 17 12:18:10.742669 containerd[1456]: time="2025-01-17T12:18:10.742586010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:10.743527 containerd[1456]: time="2025-01-17T12:18:10.743483177Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764305" Jan 17 12:18:10.744679 containerd[1456]: time="2025-01-17T12:18:10.744627789Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:10.747281 containerd[1456]: time="2025-01-17T12:18:10.747220615Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:10.747982 containerd[1456]: time="2025-01-17T12:18:10.747938405Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.493483105s" Jan 17 12:18:10.747982 containerd[1456]: time="2025-01-17T12:18:10.747970415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:18:10.755839 containerd[1456]: time="2025-01-17T12:18:10.755782004Z" level=info msg="CreateContainer within sandbox \"f9f0246837c07562d56875c5528a652321c4ca4f6a00e07d7f7a033542edc4ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:18:10.770081 containerd[1456]: time="2025-01-17T12:18:10.770034054Z" level=info msg="CreateContainer within sandbox \"f9f0246837c07562d56875c5528a652321c4ca4f6a00e07d7f7a033542edc4ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a88e0ab235280e079a8e7bc2547a2823c71f03c121583181da47f41aeba05732\"" Jan 17 12:18:10.770679 containerd[1456]: time="2025-01-17T12:18:10.770454715Z" level=info msg="StartContainer for \"a88e0ab235280e079a8e7bc2547a2823c71f03c121583181da47f41aeba05732\"" Jan 17 12:18:10.807036 systemd[1]: Started cri-containerd-a88e0ab235280e079a8e7bc2547a2823c71f03c121583181da47f41aeba05732.scope - libcontainer container a88e0ab235280e079a8e7bc2547a2823c71f03c121583181da47f41aeba05732. 
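Editor's note: for scale, the tigera-operator pull above reports 21,764,305 bytes read in 4.493483105 s from quay.io, roughly 4.8 MB/s. A quick check of that arithmetic with the values copied from the log:

    package main

    import "fmt"

    func main() {
        const bytesRead = 21764305.0 // "bytes read" for quay.io/tigera/operator:v1.36.2
        const seconds = 4.493483105  // pull duration reported by containerd
        fmt.Printf("%.2f MB/s (%.2f MiB/s)\n",
            bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
        // ~4.84 MB/s (~4.62 MiB/s)
    }

Note also that containerd records both the repo tag (operator:v1.36.2) and the repo digest (operator@sha256:fc9e...), and the container is started from the resolved image id, not the tag.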
Jan 17 12:18:10.837866 containerd[1456]: time="2025-01-17T12:18:10.837820742Z" level=info msg="StartContainer for \"a88e0ab235280e079a8e7bc2547a2823c71f03c121583181da47f41aeba05732\" returns successfully" Jan 17 12:18:11.189854 kubelet[2614]: I0117 12:18:11.189707 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9tr9" podStartSLOduration=6.189684607 podStartE2EDuration="6.189684607s" podCreationTimestamp="2025-01-17 12:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:07.254442893 +0000 UTC m=+16.216997209" watchObservedRunningTime="2025-01-17 12:18:11.189684607 +0000 UTC m=+20.152238922" Jan 17 12:18:11.189854 kubelet[2614]: I0117 12:18:11.189855 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-jsqk2" podStartSLOduration=1.6931209379999999 podStartE2EDuration="6.189848495s" podCreationTimestamp="2025-01-17 12:18:05 +0000 UTC" firstStartedPulling="2025-01-17 12:18:06.253903461 +0000 UTC m=+15.216457776" lastFinishedPulling="2025-01-17 12:18:10.750631018 +0000 UTC m=+19.713185333" observedRunningTime="2025-01-17 12:18:11.18955359 +0000 UTC m=+20.152107905" watchObservedRunningTime="2025-01-17 12:18:11.189848495 +0000 UTC m=+20.152402810" Jan 17 12:18:14.050727 kubelet[2614]: I0117 12:18:14.048549 2614 topology_manager.go:215] "Topology Admit Handler" podUID="2626b2e8-3d7d-4d87-acf6-61679e0ab979" podNamespace="calico-system" podName="calico-typha-f99b67d7d-k24t6" Jan 17 12:18:14.060325 kubelet[2614]: I0117 12:18:14.060284 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2626b2e8-3d7d-4d87-acf6-61679e0ab979-tigera-ca-bundle\") pod \"calico-typha-f99b67d7d-k24t6\" (UID: \"2626b2e8-3d7d-4d87-acf6-61679e0ab979\") " pod="calico-system/calico-typha-f99b67d7d-k24t6" Jan 17 12:18:14.060325 kubelet[2614]: I0117 12:18:14.060325 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2626b2e8-3d7d-4d87-acf6-61679e0ab979-typha-certs\") pod \"calico-typha-f99b67d7d-k24t6\" (UID: \"2626b2e8-3d7d-4d87-acf6-61679e0ab979\") " pod="calico-system/calico-typha-f99b67d7d-k24t6" Jan 17 12:18:14.060513 kubelet[2614]: I0117 12:18:14.060346 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk4z\" (UniqueName: \"kubernetes.io/projected/2626b2e8-3d7d-4d87-acf6-61679e0ab979-kube-api-access-8fk4z\") pod \"calico-typha-f99b67d7d-k24t6\" (UID: \"2626b2e8-3d7d-4d87-acf6-61679e0ab979\") " pod="calico-system/calico-typha-f99b67d7d-k24t6" Jan 17 12:18:14.063212 systemd[1]: Created slice kubepods-besteffort-pod2626b2e8_3d7d_4d87_acf6_61679e0ab979.slice - libcontainer container kubepods-besteffort-pod2626b2e8_3d7d_4d87_acf6_61679e0ab979.slice. Jan 17 12:18:14.141689 kubelet[2614]: I0117 12:18:14.139306 2614 topology_manager.go:215] "Topology Admit Handler" podUID="6d29dd69-e186-446d-97ad-2818cb7b170b" podNamespace="calico-system" podName="calico-node-bg2c5" Jan 17 12:18:14.151944 systemd[1]: Created slice kubepods-besteffort-pod6d29dd69_e186_446d_97ad_2818cb7b170b.slice - libcontainer container kubepods-besteffort-pod6d29dd69_e186_446d_97ad_2818cb7b170b.slice. 
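Editor's note: the two pod_startup_latency_tracker entries above make the two durations easy to relate. For kube-proxy nothing was pulled (the pull timestamps are the zero value), so podStartSLOduration equals podStartE2EDuration; for tigera-operator the SLO duration is the E2E duration minus the image-pull window. Reproducing the tigera-operator numbers from the timestamps in the log (a sketch of the arithmetic, not the tracker's code):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-01-17 12:18:05 +0000 UTC")               // podCreationTimestamp
        running := mustParse("2025-01-17 12:18:11.189848495 +0000 UTC")     // watchObservedRunningTime
        firstPull := mustParse("2025-01-17 12:18:06.253903461 +0000 UTC")   // firstStartedPulling
        lastPull := mustParse("2025-01-17 12:18:10.750631018 +0000 UTC")    // lastFinishedPulling

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // E2E minus time spent pulling the image

        fmt.Println("podStartE2EDuration:", e2e) // 6.189848495s, matches the log
        fmt.Println("podStartSLOduration:", slo) // 1.693120938s, matches the log
    }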
Jan 17 12:18:14.161159 kubelet[2614]: I0117 12:18:14.161097 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-flexvol-driver-host\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161159 kubelet[2614]: I0117 12:18:14.161137 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-policysync\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161159 kubelet[2614]: I0117 12:18:14.161155 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d29dd69-e186-446d-97ad-2818cb7b170b-node-certs\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161159 kubelet[2614]: I0117 12:18:14.161170 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-var-run-calico\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161456 kubelet[2614]: I0117 12:18:14.161184 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-cni-bin-dir\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161456 kubelet[2614]: I0117 12:18:14.161198 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-cni-log-dir\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161456 kubelet[2614]: I0117 12:18:14.161282 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d29dd69-e186-446d-97ad-2818cb7b170b-tigera-ca-bundle\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161456 kubelet[2614]: I0117 12:18:14.161342 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-cni-net-dir\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161456 kubelet[2614]: I0117 12:18:14.161374 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkml8\" (UniqueName: \"kubernetes.io/projected/6d29dd69-e186-446d-97ad-2818cb7b170b-kube-api-access-bkml8\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161613 kubelet[2614]: I0117 12:18:14.161498 2614 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-lib-modules\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161613 kubelet[2614]: I0117 12:18:14.161537 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-xtables-lock\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.161613 kubelet[2614]: I0117 12:18:14.161558 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d29dd69-e186-446d-97ad-2818cb7b170b-var-lib-calico\") pod \"calico-node-bg2c5\" (UID: \"6d29dd69-e186-446d-97ad-2818cb7b170b\") " pod="calico-system/calico-node-bg2c5" Jan 17 12:18:14.253462 kubelet[2614]: I0117 12:18:14.253398 2614 topology_manager.go:215] "Topology Admit Handler" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" podNamespace="calico-system" podName="csi-node-driver-b9b6b" Jan 17 12:18:14.253768 kubelet[2614]: E0117 12:18:14.253735 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:14.262513 kubelet[2614]: I0117 12:18:14.262045 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d31fd11e-f0a1-43ba-8772-07b005c2e59d-socket-dir\") pod \"csi-node-driver-b9b6b\" (UID: \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\") " pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:14.262513 kubelet[2614]: I0117 12:18:14.262094 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck858\" (UniqueName: \"kubernetes.io/projected/d31fd11e-f0a1-43ba-8772-07b005c2e59d-kube-api-access-ck858\") pod \"csi-node-driver-b9b6b\" (UID: \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\") " pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:14.262513 kubelet[2614]: I0117 12:18:14.262158 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d31fd11e-f0a1-43ba-8772-07b005c2e59d-varrun\") pod \"csi-node-driver-b9b6b\" (UID: \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\") " pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:14.262513 kubelet[2614]: I0117 12:18:14.262211 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d31fd11e-f0a1-43ba-8772-07b005c2e59d-kubelet-dir\") pod \"csi-node-driver-b9b6b\" (UID: \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\") " pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:14.262513 kubelet[2614]: I0117 12:18:14.262294 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d31fd11e-f0a1-43ba-8772-07b005c2e59d-registration-dir\") pod 
\"csi-node-driver-b9b6b\" (UID: \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\") " pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:14.265236 kubelet[2614]: E0117 12:18:14.265184 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.265236 kubelet[2614]: W0117 12:18:14.265210 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.265236 kubelet[2614]: E0117 12:18:14.265239 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.265496 kubelet[2614]: E0117 12:18:14.265468 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.265496 kubelet[2614]: W0117 12:18:14.265479 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.265881 kubelet[2614]: E0117 12:18:14.265518 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.265881 kubelet[2614]: E0117 12:18:14.265855 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.265881 kubelet[2614]: W0117 12:18:14.265864 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.265881 kubelet[2614]: E0117 12:18:14.265879 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.266187 kubelet[2614]: E0117 12:18:14.266158 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.266187 kubelet[2614]: W0117 12:18:14.266172 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.266187 kubelet[2614]: E0117 12:18:14.266180 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.266961 kubelet[2614]: E0117 12:18:14.266934 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.267076 kubelet[2614]: W0117 12:18:14.267054 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.267195 kubelet[2614]: E0117 12:18:14.267174 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.267501 kubelet[2614]: E0117 12:18:14.267469 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.267501 kubelet[2614]: W0117 12:18:14.267489 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.267501 kubelet[2614]: E0117 12:18:14.267505 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.289535 kubelet[2614]: E0117 12:18:14.289496 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.289535 kubelet[2614]: W0117 12:18:14.289523 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.289774 kubelet[2614]: E0117 12:18:14.289549 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.364072 kubelet[2614]: E0117 12:18:14.363926 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.364433 kubelet[2614]: W0117 12:18:14.364249 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.364433 kubelet[2614]: E0117 12:18:14.364281 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.366671 kubelet[2614]: E0117 12:18:14.364881 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.366671 kubelet[2614]: W0117 12:18:14.364896 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.366671 kubelet[2614]: E0117 12:18:14.364919 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.366671 kubelet[2614]: E0117 12:18:14.366470 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:14.366859 kubelet[2614]: E0117 12:18:14.366845 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.366922 kubelet[2614]: W0117 12:18:14.366909 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.366995 kubelet[2614]: E0117 12:18:14.366982 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.367247 containerd[1456]: time="2025-01-17T12:18:14.367162807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f99b67d7d-k24t6,Uid:2626b2e8-3d7d-4d87-acf6-61679e0ab979,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:14.373185 kubelet[2614]: E0117 12:18:14.372752 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.373185 kubelet[2614]: W0117 12:18:14.372778 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.373185 kubelet[2614]: E0117 12:18:14.372856 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.374929 kubelet[2614]: E0117 12:18:14.374771 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.374929 kubelet[2614]: W0117 12:18:14.374784 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.374929 kubelet[2614]: E0117 12:18:14.374833 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.375168 kubelet[2614]: E0117 12:18:14.375139 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.375227 kubelet[2614]: W0117 12:18:14.375214 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.375335 kubelet[2614]: E0117 12:18:14.375316 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.375706 kubelet[2614]: E0117 12:18:14.375694 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.375892 kubelet[2614]: W0117 12:18:14.375734 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.375892 kubelet[2614]: E0117 12:18:14.375770 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.376007 kubelet[2614]: E0117 12:18:14.375995 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.378724 kubelet[2614]: W0117 12:18:14.376041 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.378724 kubelet[2614]: E0117 12:18:14.376083 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.379122 kubelet[2614]: E0117 12:18:14.378950 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.379122 kubelet[2614]: W0117 12:18:14.378968 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.379122 kubelet[2614]: E0117 12:18:14.379016 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.380892 kubelet[2614]: E0117 12:18:14.380872 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.381028 kubelet[2614]: W0117 12:18:14.380949 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.381060 kubelet[2614]: E0117 12:18:14.381023 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.381472 kubelet[2614]: E0117 12:18:14.381460 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.381600 kubelet[2614]: W0117 12:18:14.381522 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.381600 kubelet[2614]: E0117 12:18:14.381577 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.382036 kubelet[2614]: E0117 12:18:14.381907 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.382036 kubelet[2614]: W0117 12:18:14.381917 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.382036 kubelet[2614]: E0117 12:18:14.381991 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.382272 kubelet[2614]: E0117 12:18:14.382195 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.382272 kubelet[2614]: W0117 12:18:14.382205 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.382409 kubelet[2614]: E0117 12:18:14.382272 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.382524 kubelet[2614]: E0117 12:18:14.382513 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.382616 kubelet[2614]: W0117 12:18:14.382566 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.382694 kubelet[2614]: E0117 12:18:14.382614 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.383021 kubelet[2614]: E0117 12:18:14.382917 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.383021 kubelet[2614]: W0117 12:18:14.382927 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.383021 kubelet[2614]: E0117 12:18:14.382971 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.383180 kubelet[2614]: E0117 12:18:14.383166 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.383287 kubelet[2614]: W0117 12:18:14.383231 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.383317 kubelet[2614]: E0117 12:18:14.383281 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.383639 kubelet[2614]: E0117 12:18:14.383525 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.383639 kubelet[2614]: W0117 12:18:14.383539 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.383639 kubelet[2614]: E0117 12:18:14.383599 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.383817 kubelet[2614]: E0117 12:18:14.383799 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.384040 kubelet[2614]: W0117 12:18:14.383974 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.384040 kubelet[2614]: E0117 12:18:14.384030 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.384309 kubelet[2614]: E0117 12:18:14.384297 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.384438 kubelet[2614]: W0117 12:18:14.384356 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.384473 kubelet[2614]: E0117 12:18:14.384428 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.384711 kubelet[2614]: E0117 12:18:14.384689 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.385498 kubelet[2614]: W0117 12:18:14.385468 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.385614 kubelet[2614]: E0117 12:18:14.385589 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.385837 kubelet[2614]: E0117 12:18:14.385789 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.385837 kubelet[2614]: W0117 12:18:14.385826 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.386163 kubelet[2614]: E0117 12:18:14.386029 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.386212 kubelet[2614]: E0117 12:18:14.386199 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.386212 kubelet[2614]: W0117 12:18:14.386211 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.386283 kubelet[2614]: E0117 12:18:14.386251 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.386505 kubelet[2614]: E0117 12:18:14.386447 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.386505 kubelet[2614]: W0117 12:18:14.386455 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.386505 kubelet[2614]: E0117 12:18:14.386464 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.387872 kubelet[2614]: E0117 12:18:14.386727 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.387872 kubelet[2614]: W0117 12:18:14.386740 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.387872 kubelet[2614]: E0117 12:18:14.386748 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.394367 kubelet[2614]: E0117 12:18:14.394316 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.394705 kubelet[2614]: W0117 12:18:14.394465 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.394705 kubelet[2614]: E0117 12:18:14.394491 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:14.399311 kubelet[2614]: E0117 12:18:14.399273 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:14.399311 kubelet[2614]: W0117 12:18:14.399306 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:14.400461 kubelet[2614]: E0117 12:18:14.399330 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:14.433344 containerd[1456]: time="2025-01-17T12:18:14.433058704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:14.433344 containerd[1456]: time="2025-01-17T12:18:14.433119328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:14.433344 containerd[1456]: time="2025-01-17T12:18:14.433130619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:14.433568 containerd[1456]: time="2025-01-17T12:18:14.433228033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:14.454975 kubelet[2614]: E0117 12:18:14.454931 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:14.454955 systemd[1]: Started cri-containerd-3cb694d99dcce93bfc3644cd6d0b1511813f6bdb4a0697641a0d9056dace50a8.scope - libcontainer container 3cb694d99dcce93bfc3644cd6d0b1511813f6bdb4a0697641a0d9056dace50a8. Jan 17 12:18:14.456238 containerd[1456]: time="2025-01-17T12:18:14.455369222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bg2c5,Uid:6d29dd69-e186-446d-97ad-2818cb7b170b,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:14.486424 containerd[1456]: time="2025-01-17T12:18:14.485889607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:14.486424 containerd[1456]: time="2025-01-17T12:18:14.485945213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:14.486424 containerd[1456]: time="2025-01-17T12:18:14.485955602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:14.486424 containerd[1456]: time="2025-01-17T12:18:14.486033308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:14.510045 containerd[1456]: time="2025-01-17T12:18:14.509826672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f99b67d7d-k24t6,Uid:2626b2e8-3d7d-4d87-acf6-61679e0ab979,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cb694d99dcce93bfc3644cd6d0b1511813f6bdb4a0697641a0d9056dace50a8\"" Jan 17 12:18:14.512197 kubelet[2614]: E0117 12:18:14.512162 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:14.517209 systemd[1]: Started cri-containerd-62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5.scope - libcontainer container 62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5. 
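Editor's note: the blocks of driver-call.go and plugins.go errors surrounding the calico-node admission come from the kubelet's FlexVolume prober. On each probe it execs every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument "init" and expects a JSON status on stdout. Here the Calico nodeagent~uds/uds binary is not present yet (presumably calico-node's flexvol-driver init container has not installed it), so the exec fails, stdout is empty, and unmarshalling "" produces "unexpected end of JSON input". A minimal sketch of that call convention; the DriverStatus field names are an assumption based on the documented FlexVolume contract, e.g. {"status":"Success","capabilities":{"attach":false}}.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus approximates the JSON a FlexVolume driver prints on success.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func initDriver(driverPath string) (*DriverStatus, error) {
        out, err := exec.Command(driverPath, "init").Output()
        if err != nil {
            // First failure in the log: the executable is missing, stdout stays empty.
            fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // With empty output this is exactly "unexpected end of JSON input".
            return nil, err
        }
        return &st, nil
    }

    func main() {
        _, err := initDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println("init:", err)
    }

The errors are noisy but harmless at this stage; they repeat on every periodic plugin probe until the driver binary appears on disk.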
Jan 17 12:18:14.518784 containerd[1456]: time="2025-01-17T12:18:14.517576126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:18:14.548455 containerd[1456]: time="2025-01-17T12:18:14.548369164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bg2c5,Uid:6d29dd69-e186-446d-97ad-2818cb7b170b,Namespace:calico-system,Attempt:0,} returns sandbox id \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\"" Jan 17 12:18:14.551107 kubelet[2614]: E0117 12:18:14.550565 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:16.130313 kubelet[2614]: E0117 12:18:16.130258 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:16.458384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609071118.mount: Deactivated successfully. Jan 17 12:18:18.129950 kubelet[2614]: E0117 12:18:18.129882 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:18.230273 containerd[1456]: time="2025-01-17T12:18:18.230203857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:18.231172 containerd[1456]: time="2025-01-17T12:18:18.231113385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:18:18.232352 containerd[1456]: time="2025-01-17T12:18:18.232326895Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:18.235163 containerd[1456]: time="2025-01-17T12:18:18.235111385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:18.236243 containerd[1456]: time="2025-01-17T12:18:18.236190473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.718571867s" Jan 17 12:18:18.236390 containerd[1456]: time="2025-01-17T12:18:18.236251146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:18:18.237958 containerd[1456]: time="2025-01-17T12:18:18.237925331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:18:18.275647 containerd[1456]: time="2025-01-17T12:18:18.275572817Z" level=info msg="CreateContainer within sandbox 
\"3cb694d99dcce93bfc3644cd6d0b1511813f6bdb4a0697641a0d9056dace50a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:18:18.297808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089252858.mount: Deactivated successfully. Jan 17 12:18:18.298290 containerd[1456]: time="2025-01-17T12:18:18.298220759Z" level=info msg="CreateContainer within sandbox \"3cb694d99dcce93bfc3644cd6d0b1511813f6bdb4a0697641a0d9056dace50a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9883c2e6acc107972da4ec4a3b26778070449ca46804b34ad456c791423a87ad\"" Jan 17 12:18:18.302454 containerd[1456]: time="2025-01-17T12:18:18.302420547Z" level=info msg="StartContainer for \"9883c2e6acc107972da4ec4a3b26778070449ca46804b34ad456c791423a87ad\"" Jan 17 12:18:18.340988 systemd[1]: Started cri-containerd-9883c2e6acc107972da4ec4a3b26778070449ca46804b34ad456c791423a87ad.scope - libcontainer container 9883c2e6acc107972da4ec4a3b26778070449ca46804b34ad456c791423a87ad. Jan 17 12:18:18.402509 containerd[1456]: time="2025-01-17T12:18:18.402456243Z" level=info msg="StartContainer for \"9883c2e6acc107972da4ec4a3b26778070449ca46804b34ad456c791423a87ad\" returns successfully" Jan 17 12:18:19.199073 kubelet[2614]: E0117 12:18:19.199020 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:19.289916 kubelet[2614]: E0117 12:18:19.289874 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.289916 kubelet[2614]: W0117 12:18:19.289904 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.289916 kubelet[2614]: E0117 12:18:19.289928 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.290301 kubelet[2614]: E0117 12:18:19.290283 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.290301 kubelet[2614]: W0117 12:18:19.290295 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.290385 kubelet[2614]: E0117 12:18:19.290306 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.290582 kubelet[2614]: E0117 12:18:19.290566 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.290582 kubelet[2614]: W0117 12:18:19.290577 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.290689 kubelet[2614]: E0117 12:18:19.290588 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.290899 kubelet[2614]: E0117 12:18:19.290882 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.290899 kubelet[2614]: W0117 12:18:19.290895 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.290985 kubelet[2614]: E0117 12:18:19.290906 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.291251 kubelet[2614]: E0117 12:18:19.291236 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.291251 kubelet[2614]: W0117 12:18:19.291248 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.291341 kubelet[2614]: E0117 12:18:19.291259 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.291539 kubelet[2614]: E0117 12:18:19.291523 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.291539 kubelet[2614]: W0117 12:18:19.291534 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.291621 kubelet[2614]: E0117 12:18:19.291545 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.291837 kubelet[2614]: E0117 12:18:19.291821 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.291837 kubelet[2614]: W0117 12:18:19.291832 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.292021 kubelet[2614]: E0117 12:18:19.291844 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.292101 kubelet[2614]: E0117 12:18:19.292084 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.292101 kubelet[2614]: W0117 12:18:19.292096 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.292176 kubelet[2614]: E0117 12:18:19.292107 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.292358 kubelet[2614]: E0117 12:18:19.292342 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.292358 kubelet[2614]: W0117 12:18:19.292353 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.292446 kubelet[2614]: E0117 12:18:19.292363 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.292596 kubelet[2614]: E0117 12:18:19.292581 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.292596 kubelet[2614]: W0117 12:18:19.292592 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.292690 kubelet[2614]: E0117 12:18:19.292602 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.292901 kubelet[2614]: E0117 12:18:19.292884 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.292901 kubelet[2614]: W0117 12:18:19.292895 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.292988 kubelet[2614]: E0117 12:18:19.292908 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.293163 kubelet[2614]: E0117 12:18:19.293148 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.293163 kubelet[2614]: W0117 12:18:19.293159 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.293237 kubelet[2614]: E0117 12:18:19.293169 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.293452 kubelet[2614]: E0117 12:18:19.293435 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.293452 kubelet[2614]: W0117 12:18:19.293447 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.293536 kubelet[2614]: E0117 12:18:19.293457 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.293731 kubelet[2614]: E0117 12:18:19.293713 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.293731 kubelet[2614]: W0117 12:18:19.293726 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.293818 kubelet[2614]: E0117 12:18:19.293739 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.294017 kubelet[2614]: E0117 12:18:19.294000 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.294017 kubelet[2614]: W0117 12:18:19.294012 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.294093 kubelet[2614]: E0117 12:18:19.294023 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.312524 kubelet[2614]: E0117 12:18:19.312485 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.312524 kubelet[2614]: W0117 12:18:19.312516 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.312712 kubelet[2614]: E0117 12:18:19.312540 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.312881 kubelet[2614]: E0117 12:18:19.312854 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.312881 kubelet[2614]: W0117 12:18:19.312878 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.312971 kubelet[2614]: E0117 12:18:19.312901 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.313574 kubelet[2614]: E0117 12:18:19.313290 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.313574 kubelet[2614]: W0117 12:18:19.313309 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.313574 kubelet[2614]: E0117 12:18:19.313327 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.313574 kubelet[2614]: E0117 12:18:19.313561 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.313574 kubelet[2614]: W0117 12:18:19.313569 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.313802 kubelet[2614]: E0117 12:18:19.313584 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.313840 kubelet[2614]: E0117 12:18:19.313830 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.313871 kubelet[2614]: W0117 12:18:19.313842 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.313871 kubelet[2614]: E0117 12:18:19.313865 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.314135 kubelet[2614]: E0117 12:18:19.314117 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.314135 kubelet[2614]: W0117 12:18:19.314129 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.314228 kubelet[2614]: E0117 12:18:19.314164 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.314401 kubelet[2614]: E0117 12:18:19.314376 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.314401 kubelet[2614]: W0117 12:18:19.314389 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.314477 kubelet[2614]: E0117 12:18:19.314425 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.314685 kubelet[2614]: E0117 12:18:19.314645 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.314685 kubelet[2614]: W0117 12:18:19.314679 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.314782 kubelet[2614]: E0117 12:18:19.314724 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.314948 kubelet[2614]: E0117 12:18:19.314931 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.314948 kubelet[2614]: W0117 12:18:19.314942 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.315014 kubelet[2614]: E0117 12:18:19.314961 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.315289 kubelet[2614]: E0117 12:18:19.315269 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.315289 kubelet[2614]: W0117 12:18:19.315283 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.315372 kubelet[2614]: E0117 12:18:19.315304 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.315560 kubelet[2614]: E0117 12:18:19.315541 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.315560 kubelet[2614]: W0117 12:18:19.315553 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.315630 kubelet[2614]: E0117 12:18:19.315565 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.315875 kubelet[2614]: E0117 12:18:19.315843 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.315875 kubelet[2614]: W0117 12:18:19.315863 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.315966 kubelet[2614]: E0117 12:18:19.315888 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.316098 kubelet[2614]: E0117 12:18:19.316081 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.316098 kubelet[2614]: W0117 12:18:19.316092 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.316162 kubelet[2614]: E0117 12:18:19.316105 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:19.316370 kubelet[2614]: E0117 12:18:19.316352 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.316370 kubelet[2614]: W0117 12:18:19.316363 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.316451 kubelet[2614]: E0117 12:18:19.316389 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.316603 kubelet[2614]: E0117 12:18:19.316585 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.316603 kubelet[2614]: W0117 12:18:19.316596 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.316838 kubelet[2614]: E0117 12:18:19.316632 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.316881 kubelet[2614]: E0117 12:18:19.316861 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.316881 kubelet[2614]: W0117 12:18:19.316869 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.316949 kubelet[2614]: E0117 12:18:19.316883 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.317239 kubelet[2614]: E0117 12:18:19.317222 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.317239 kubelet[2614]: W0117 12:18:19.317236 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.317316 kubelet[2614]: E0117 12:18:19.317255 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:19.317482 kubelet[2614]: E0117 12:18:19.317464 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:19.317482 kubelet[2614]: W0117 12:18:19.317476 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:19.317550 kubelet[2614]: E0117 12:18:19.317485 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.130159 kubelet[2614]: E0117 12:18:20.130079 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:20.200260 kubelet[2614]: I0117 12:18:20.200216 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:20.200892 kubelet[2614]: E0117 12:18:20.200873 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:20.201448 kubelet[2614]: E0117 12:18:20.201420 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.201682 kubelet[2614]: W0117 12:18:20.201445 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.201682 kubelet[2614]: E0117 12:18:20.201471 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.201866 kubelet[2614]: E0117 12:18:20.201850 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.201866 kubelet[2614]: W0117 12:18:20.201865 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.201939 kubelet[2614]: E0117 12:18:20.201881 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.202178 kubelet[2614]: E0117 12:18:20.202146 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.202178 kubelet[2614]: W0117 12:18:20.202166 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.202178 kubelet[2614]: E0117 12:18:20.202175 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.202855 kubelet[2614]: E0117 12:18:20.202822 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.202855 kubelet[2614]: W0117 12:18:20.202837 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.202855 kubelet[2614]: E0117 12:18:20.202851 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.203135 kubelet[2614]: E0117 12:18:20.203116 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.203135 kubelet[2614]: W0117 12:18:20.203132 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.203207 kubelet[2614]: E0117 12:18:20.203144 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.203414 kubelet[2614]: E0117 12:18:20.203397 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.203414 kubelet[2614]: W0117 12:18:20.203411 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.203485 kubelet[2614]: E0117 12:18:20.203435 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.203808 kubelet[2614]: E0117 12:18:20.203788 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.203808 kubelet[2614]: W0117 12:18:20.203803 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.203903 kubelet[2614]: E0117 12:18:20.203817 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.204073 kubelet[2614]: E0117 12:18:20.204054 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.204129 kubelet[2614]: W0117 12:18:20.204077 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.204129 kubelet[2614]: E0117 12:18:20.204087 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.204322 kubelet[2614]: E0117 12:18:20.204298 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.204322 kubelet[2614]: W0117 12:18:20.204311 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.204395 kubelet[2614]: E0117 12:18:20.204322 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.204712 kubelet[2614]: E0117 12:18:20.204561 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.204712 kubelet[2614]: W0117 12:18:20.204575 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.204712 kubelet[2614]: E0117 12:18:20.204597 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.204907 kubelet[2614]: E0117 12:18:20.204853 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.204907 kubelet[2614]: W0117 12:18:20.204880 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.204907 kubelet[2614]: E0117 12:18:20.204892 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.205143 kubelet[2614]: E0117 12:18:20.205131 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.205143 kubelet[2614]: W0117 12:18:20.205142 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.205197 kubelet[2614]: E0117 12:18:20.205152 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.205378 kubelet[2614]: E0117 12:18:20.205365 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.205378 kubelet[2614]: W0117 12:18:20.205377 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.205461 kubelet[2614]: E0117 12:18:20.205386 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.205599 kubelet[2614]: E0117 12:18:20.205586 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.205599 kubelet[2614]: W0117 12:18:20.205597 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.205705 kubelet[2614]: E0117 12:18:20.205606 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.205867 kubelet[2614]: E0117 12:18:20.205836 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.205867 kubelet[2614]: W0117 12:18:20.205853 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.206705 kubelet[2614]: E0117 12:18:20.205864 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.218339 kubelet[2614]: E0117 12:18:20.218308 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.218339 kubelet[2614]: W0117 12:18:20.218331 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.218430 kubelet[2614]: E0117 12:18:20.218353 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.218607 kubelet[2614]: E0117 12:18:20.218591 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.218607 kubelet[2614]: W0117 12:18:20.218602 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.218689 kubelet[2614]: E0117 12:18:20.218617 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.218958 kubelet[2614]: E0117 12:18:20.218940 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.218994 kubelet[2614]: W0117 12:18:20.218957 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.218994 kubelet[2614]: E0117 12:18:20.218978 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.219215 kubelet[2614]: E0117 12:18:20.219203 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.219251 kubelet[2614]: W0117 12:18:20.219214 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.219251 kubelet[2614]: E0117 12:18:20.219229 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.219467 kubelet[2614]: E0117 12:18:20.219454 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.219467 kubelet[2614]: W0117 12:18:20.219466 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.219523 kubelet[2614]: E0117 12:18:20.219481 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.219777 kubelet[2614]: E0117 12:18:20.219757 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.219777 kubelet[2614]: W0117 12:18:20.219774 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.219837 kubelet[2614]: E0117 12:18:20.219793 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.220071 kubelet[2614]: E0117 12:18:20.220056 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.220099 kubelet[2614]: W0117 12:18:20.220071 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.220099 kubelet[2614]: E0117 12:18:20.220087 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.220309 kubelet[2614]: E0117 12:18:20.220296 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.220309 kubelet[2614]: W0117 12:18:20.220307 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.220369 kubelet[2614]: E0117 12:18:20.220320 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.220554 kubelet[2614]: E0117 12:18:20.220540 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.220586 kubelet[2614]: W0117 12:18:20.220552 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.220586 kubelet[2614]: E0117 12:18:20.220569 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.220831 kubelet[2614]: E0117 12:18:20.220814 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.220831 kubelet[2614]: W0117 12:18:20.220831 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.220933 kubelet[2614]: E0117 12:18:20.220860 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.221079 kubelet[2614]: E0117 12:18:20.221062 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.221079 kubelet[2614]: W0117 12:18:20.221075 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.221167 kubelet[2614]: E0117 12:18:20.221109 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.221353 kubelet[2614]: E0117 12:18:20.221335 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.221353 kubelet[2614]: W0117 12:18:20.221349 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.221452 kubelet[2614]: E0117 12:18:20.221367 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.221635 kubelet[2614]: E0117 12:18:20.221618 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.221635 kubelet[2614]: W0117 12:18:20.221632 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.221753 kubelet[2614]: E0117 12:18:20.221649 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.221918 kubelet[2614]: E0117 12:18:20.221897 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.221918 kubelet[2614]: W0117 12:18:20.221910 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.222088 kubelet[2614]: E0117 12:18:20.221926 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.222144 kubelet[2614]: E0117 12:18:20.222127 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.222144 kubelet[2614]: W0117 12:18:20.222138 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.222205 kubelet[2614]: E0117 12:18:20.222150 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.222354 kubelet[2614]: E0117 12:18:20.222338 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.222354 kubelet[2614]: W0117 12:18:20.222348 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.222419 kubelet[2614]: E0117 12:18:20.222356 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.222561 kubelet[2614]: E0117 12:18:20.222546 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.222561 kubelet[2614]: W0117 12:18:20.222557 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.222620 kubelet[2614]: E0117 12:18:20.222564 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:20.222903 kubelet[2614]: E0117 12:18:20.222888 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:20.222903 kubelet[2614]: W0117 12:18:20.222899 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:20.222968 kubelet[2614]: E0117 12:18:20.222909 2614 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:20.408057 containerd[1456]: time="2025-01-17T12:18:20.407980226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.410325 containerd[1456]: time="2025-01-17T12:18:20.410258424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:18:20.411918 containerd[1456]: time="2025-01-17T12:18:20.411878668Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.414313 containerd[1456]: time="2025-01-17T12:18:20.414205247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.414841 containerd[1456]: time="2025-01-17T12:18:20.414754127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.176790174s" Jan 17 12:18:20.414841 containerd[1456]: time="2025-01-17T12:18:20.414783312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:18:20.417937 containerd[1456]: time="2025-01-17T12:18:20.417872855Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:18:20.436690 containerd[1456]: time="2025-01-17T12:18:20.436612147Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07\"" Jan 17 12:18:20.440762 containerd[1456]: time="2025-01-17T12:18:20.437466622Z" level=info msg="StartContainer for \"9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07\"" Jan 17 12:18:20.476892 systemd[1]: Started cri-containerd-9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07.scope - libcontainer container 9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07. Jan 17 12:18:20.518059 containerd[1456]: time="2025-01-17T12:18:20.518002980Z" level=info msg="StartContainer for \"9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07\" returns successfully" Jan 17 12:18:20.535988 systemd[1]: cri-containerd-9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07.scope: Deactivated successfully. Jan 17 12:18:20.563035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07-rootfs.mount: Deactivated successfully. 
Jan 17 12:18:20.627862 containerd[1456]: time="2025-01-17T12:18:20.627772259Z" level=info msg="shim disconnected" id=9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07 namespace=k8s.io Jan 17 12:18:20.627862 containerd[1456]: time="2025-01-17T12:18:20.627847921Z" level=warning msg="cleaning up after shim disconnected" id=9ad37d61649cd18a05fd296a958e19cbbcaafc2206f3f873cf9974d7055bee07 namespace=k8s.io Jan 17 12:18:20.627862 containerd[1456]: time="2025-01-17T12:18:20.627859323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:21.203421 kubelet[2614]: E0117 12:18:21.203377 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:21.204341 containerd[1456]: time="2025-01-17T12:18:21.204247875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:18:21.405311 kubelet[2614]: I0117 12:18:21.405245 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f99b67d7d-k24t6" podStartSLOduration=3.681483299 podStartE2EDuration="7.405224078s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:14.513693048 +0000 UTC m=+23.476247363" lastFinishedPulling="2025-01-17 12:18:18.237433827 +0000 UTC m=+27.199988142" observedRunningTime="2025-01-17 12:18:19.213193417 +0000 UTC m=+28.175747733" watchObservedRunningTime="2025-01-17 12:18:21.405224078 +0000 UTC m=+30.367778393" Jan 17 12:18:22.130855 kubelet[2614]: E0117 12:18:22.130777 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:24.130495 kubelet[2614]: E0117 12:18:24.130426 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:24.271002 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:60616.service - OpenSSH per-connection server daemon (10.0.0.1:60616). Jan 17 12:18:24.300399 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 60616 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:24.302254 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:24.306738 systemd-logind[1439]: New session 8 of user core. Jan 17 12:18:24.317803 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:18:24.439490 sshd[3318]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:24.444379 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:60616.service: Deactivated successfully. Jan 17 12:18:24.447403 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:18:24.449287 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:18:24.450386 systemd-logind[1439]: Removed session 8. 
Jan 17 12:18:26.130429 kubelet[2614]: E0117 12:18:26.130358 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:27.146964 containerd[1456]: time="2025-01-17T12:18:27.146897602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:27.147608 containerd[1456]: time="2025-01-17T12:18:27.147544717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:18:27.148737 containerd[1456]: time="2025-01-17T12:18:27.148702530Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:27.150924 containerd[1456]: time="2025-01-17T12:18:27.150877443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:27.151562 containerd[1456]: time="2025-01-17T12:18:27.151521001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.94723305s" Jan 17 12:18:27.151562 containerd[1456]: time="2025-01-17T12:18:27.151549534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:18:27.153642 containerd[1456]: time="2025-01-17T12:18:27.153608148Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:18:27.168268 containerd[1456]: time="2025-01-17T12:18:27.168213483Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862\"" Jan 17 12:18:27.168762 containerd[1456]: time="2025-01-17T12:18:27.168727428Z" level=info msg="StartContainer for \"f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862\"" Jan 17 12:18:27.195793 systemd[1]: Started cri-containerd-f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862.scope - libcontainer container f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862. 
Jan 17 12:18:27.301410 containerd[1456]: time="2025-01-17T12:18:27.301333185Z" level=info msg="StartContainer for \"f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862\" returns successfully" Jan 17 12:18:28.130271 kubelet[2614]: E0117 12:18:28.130217 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:28.224691 kubelet[2614]: E0117 12:18:28.223974 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:29.225542 kubelet[2614]: E0117 12:18:29.225496 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:29.456016 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:53192.service - OpenSSH per-connection server daemon (10.0.0.1:53192). Jan 17 12:18:29.514495 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 53192 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:29.516418 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:29.521839 systemd-logind[1439]: New session 9 of user core. Jan 17 12:18:29.532683 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:18:29.838540 sshd[3378]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:29.843149 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:53192.service: Deactivated successfully. Jan 17 12:18:29.845874 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:18:29.846563 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:18:29.847608 systemd-logind[1439]: Removed session 9. Jan 17 12:18:29.867103 containerd[1456]: time="2025-01-17T12:18:29.867035502Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:18:29.869983 systemd[1]: cri-containerd-f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862.scope: Deactivated successfully. Jan 17 12:18:29.883861 kubelet[2614]: I0117 12:18:29.883825 2614 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:18:29.893396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862-rootfs.mount: Deactivated successfully. 
Jan 17 12:18:30.037568 containerd[1456]: time="2025-01-17T12:18:30.037496788Z" level=info msg="shim disconnected" id=f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862 namespace=k8s.io Jan 17 12:18:30.037568 containerd[1456]: time="2025-01-17T12:18:30.037559596Z" level=warning msg="cleaning up after shim disconnected" id=f26753ab0c8bbbe36f174c8bd83869a22ef95c7a2bd812d8111a23d24af29862 namespace=k8s.io Jan 17 12:18:30.037568 containerd[1456]: time="2025-01-17T12:18:30.037568763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:30.056771 kubelet[2614]: I0117 12:18:30.056712 2614 topology_manager.go:215] "Topology Admit Handler" podUID="8f642288-757c-4272-856a-d51e252297f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z4vvv" Jan 17 12:18:30.063368 systemd[1]: Created slice kubepods-burstable-pod8f642288_757c_4272_856a_d51e252297f4.slice - libcontainer container kubepods-burstable-pod8f642288_757c_4272_856a_d51e252297f4.slice. Jan 17 12:18:30.113859 kubelet[2614]: I0117 12:18:30.113331 2614 topology_manager.go:215] "Topology Admit Handler" podUID="8ea49426-4d71-485c-81af-880c7b039c97" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hcv7d" Jan 17 12:18:30.113859 kubelet[2614]: I0117 12:18:30.113741 2614 topology_manager.go:215] "Topology Admit Handler" podUID="0dd97009-378f-4ef4-b765-3bec41555af3" podNamespace="calico-apiserver" podName="calico-apiserver-6d54fccbdb-4zkzk" Jan 17 12:18:30.114009 kubelet[2614]: I0117 12:18:30.113865 2614 topology_manager.go:215] "Topology Admit Handler" podUID="bc03b6ec-75c2-4b0b-bb26-44676fd171af" podNamespace="calico-system" podName="calico-kube-controllers-67f856786c-xcmdb" Jan 17 12:18:30.114009 kubelet[2614]: I0117 12:18:30.113966 2614 topology_manager.go:215] "Topology Admit Handler" podUID="e0751a22-7602-4c7d-a7ee-e530eb41ad09" podNamespace="calico-apiserver" podName="calico-apiserver-6d54fccbdb-hj6qq" Jan 17 12:18:30.122944 systemd[1]: Created slice kubepods-besteffort-pode0751a22_7602_4c7d_a7ee_e530eb41ad09.slice - libcontainer container kubepods-besteffort-pode0751a22_7602_4c7d_a7ee_e530eb41ad09.slice. Jan 17 12:18:30.128045 systemd[1]: Created slice kubepods-burstable-pod8ea49426_4d71_485c_81af_880c7b039c97.slice - libcontainer container kubepods-burstable-pod8ea49426_4d71_485c_81af_880c7b039c97.slice. Jan 17 12:18:30.134239 systemd[1]: Created slice kubepods-besteffort-pod0dd97009_378f_4ef4_b765_3bec41555af3.slice - libcontainer container kubepods-besteffort-pod0dd97009_378f_4ef4_b765_3bec41555af3.slice. Jan 17 12:18:30.140141 systemd[1]: Created slice kubepods-besteffort-podbc03b6ec_75c2_4b0b_bb26_44676fd171af.slice - libcontainer container kubepods-besteffort-podbc03b6ec_75c2_4b0b_bb26_44676fd171af.slice. Jan 17 12:18:30.145611 systemd[1]: Created slice kubepods-besteffort-podd31fd11e_f0a1_43ba_8772_07b005c2e59d.slice - libcontainer container kubepods-besteffort-podd31fd11e_f0a1_43ba_8772_07b005c2e59d.slice. 
Jan 17 12:18:30.148317 containerd[1456]: time="2025-01-17T12:18:30.148269631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9b6b,Uid:d31fd11e-f0a1-43ba-8772-07b005c2e59d,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:30.207503 kubelet[2614]: I0117 12:18:30.207444 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f642288-757c-4272-856a-d51e252297f4-config-volume\") pod \"coredns-7db6d8ff4d-z4vvv\" (UID: \"8f642288-757c-4272-856a-d51e252297f4\") " pod="kube-system/coredns-7db6d8ff4d-z4vvv" Jan 17 12:18:30.207698 kubelet[2614]: I0117 12:18:30.207521 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjtt\" (UniqueName: \"kubernetes.io/projected/8f642288-757c-4272-856a-d51e252297f4-kube-api-access-zcjtt\") pod \"coredns-7db6d8ff4d-z4vvv\" (UID: \"8f642288-757c-4272-856a-d51e252297f4\") " pod="kube-system/coredns-7db6d8ff4d-z4vvv" Jan 17 12:18:30.228806 kubelet[2614]: E0117 12:18:30.228764 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:30.229546 containerd[1456]: time="2025-01-17T12:18:30.229502224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:18:30.308034 kubelet[2614]: I0117 12:18:30.307968 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67c9z\" (UniqueName: \"kubernetes.io/projected/e0751a22-7602-4c7d-a7ee-e530eb41ad09-kube-api-access-67c9z\") pod \"calico-apiserver-6d54fccbdb-hj6qq\" (UID: \"e0751a22-7602-4c7d-a7ee-e530eb41ad09\") " pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" Jan 17 12:18:30.308034 kubelet[2614]: I0117 12:18:30.308039 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcrj\" (UniqueName: \"kubernetes.io/projected/8ea49426-4d71-485c-81af-880c7b039c97-kube-api-access-nxcrj\") pod \"coredns-7db6d8ff4d-hcv7d\" (UID: \"8ea49426-4d71-485c-81af-880c7b039c97\") " pod="kube-system/coredns-7db6d8ff4d-hcv7d" Jan 17 12:18:30.308314 kubelet[2614]: I0117 12:18:30.308063 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ea49426-4d71-485c-81af-880c7b039c97-config-volume\") pod \"coredns-7db6d8ff4d-hcv7d\" (UID: \"8ea49426-4d71-485c-81af-880c7b039c97\") " pod="kube-system/coredns-7db6d8ff4d-hcv7d" Jan 17 12:18:30.308314 kubelet[2614]: I0117 12:18:30.308078 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc03b6ec-75c2-4b0b-bb26-44676fd171af-tigera-ca-bundle\") pod \"calico-kube-controllers-67f856786c-xcmdb\" (UID: \"bc03b6ec-75c2-4b0b-bb26-44676fd171af\") " pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" Jan 17 12:18:30.308365 kubelet[2614]: I0117 12:18:30.308335 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0dd97009-378f-4ef4-b765-3bec41555af3-calico-apiserver-certs\") pod \"calico-apiserver-6d54fccbdb-4zkzk\" (UID: \"0dd97009-378f-4ef4-b765-3bec41555af3\") " pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" Jan 
17 12:18:30.308399 kubelet[2614]: I0117 12:18:30.308357 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47xpx\" (UniqueName: \"kubernetes.io/projected/0dd97009-378f-4ef4-b765-3bec41555af3-kube-api-access-47xpx\") pod \"calico-apiserver-6d54fccbdb-4zkzk\" (UID: \"0dd97009-378f-4ef4-b765-3bec41555af3\") " pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" Jan 17 12:18:30.308441 kubelet[2614]: I0117 12:18:30.308396 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pxb\" (UniqueName: \"kubernetes.io/projected/bc03b6ec-75c2-4b0b-bb26-44676fd171af-kube-api-access-k5pxb\") pod \"calico-kube-controllers-67f856786c-xcmdb\" (UID: \"bc03b6ec-75c2-4b0b-bb26-44676fd171af\") " pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" Jan 17 12:18:30.308441 kubelet[2614]: I0117 12:18:30.308427 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0751a22-7602-4c7d-a7ee-e530eb41ad09-calico-apiserver-certs\") pod \"calico-apiserver-6d54fccbdb-hj6qq\" (UID: \"e0751a22-7602-4c7d-a7ee-e530eb41ad09\") " pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" Jan 17 12:18:30.366276 kubelet[2614]: E0117 12:18:30.365988 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:30.367219 containerd[1456]: time="2025-01-17T12:18:30.367171345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4vvv,Uid:8f642288-757c-4272-856a-d51e252297f4,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:30.430885 kubelet[2614]: E0117 12:18:30.430852 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:30.431781 containerd[1456]: time="2025-01-17T12:18:30.431444750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcv7d,Uid:8ea49426-4d71-485c-81af-880c7b039c97,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:30.447903 containerd[1456]: time="2025-01-17T12:18:30.447847694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f856786c-xcmdb,Uid:bc03b6ec-75c2-4b0b-bb26-44676fd171af,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:30.448395 containerd[1456]: time="2025-01-17T12:18:30.448089969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-4zkzk,Uid:0dd97009-378f-4ef4-b765-3bec41555af3,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:18:30.454893 containerd[1456]: time="2025-01-17T12:18:30.454828385Z" level=error msg="Failed to destroy network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.455629 containerd[1456]: time="2025-01-17T12:18:30.455232434Z" level=error msg="encountered an error cleaning up failed sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.455629 containerd[1456]: time="2025-01-17T12:18:30.455289712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9b6b,Uid:d31fd11e-f0a1-43ba-8772-07b005c2e59d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.455838 kubelet[2614]: E0117 12:18:30.455578 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.455838 kubelet[2614]: E0117 12:18:30.455645 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:30.455838 kubelet[2614]: E0117 12:18:30.455712 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9b6b" Jan 17 12:18:30.455995 kubelet[2614]: E0117 12:18:30.455771 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b9b6b_calico-system(d31fd11e-f0a1-43ba-8772-07b005c2e59d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b9b6b_calico-system(d31fd11e-f0a1-43ba-8772-07b005c2e59d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:30.523988 containerd[1456]: time="2025-01-17T12:18:30.523726480Z" level=error msg="Failed to destroy network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.524441 containerd[1456]: time="2025-01-17T12:18:30.524360460Z" level=error msg="encountered an error cleaning up failed sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.524723 containerd[1456]: time="2025-01-17T12:18:30.524642500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4vvv,Uid:8f642288-757c-4272-856a-d51e252297f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.525141 kubelet[2614]: E0117 12:18:30.525082 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.525208 kubelet[2614]: E0117 12:18:30.525168 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z4vvv" Jan 17 12:18:30.525208 kubelet[2614]: E0117 12:18:30.525194 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z4vvv" Jan 17 12:18:30.525318 kubelet[2614]: E0117 12:18:30.525250 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z4vvv_kube-system(8f642288-757c-4272-856a-d51e252297f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z4vvv_kube-system(8f642288-757c-4272-856a-d51e252297f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z4vvv" podUID="8f642288-757c-4272-856a-d51e252297f4" Jan 17 12:18:30.530508 containerd[1456]: time="2025-01-17T12:18:30.530401197Z" level=error msg="Failed to destroy network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.531085 containerd[1456]: time="2025-01-17T12:18:30.530937743Z" level=error msg="encountered an error cleaning up failed sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.531085 containerd[1456]: time="2025-01-17T12:18:30.530983049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcv7d,Uid:8ea49426-4d71-485c-81af-880c7b039c97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.531211 kubelet[2614]: E0117 12:18:30.531174 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.531248 kubelet[2614]: E0117 12:18:30.531233 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hcv7d" Jan 17 12:18:30.531278 kubelet[2614]: E0117 12:18:30.531254 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hcv7d" Jan 17 12:18:30.531322 kubelet[2614]: E0117 12:18:30.531292 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hcv7d_kube-system(8ea49426-4d71-485c-81af-880c7b039c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hcv7d_kube-system(8ea49426-4d71-485c-81af-880c7b039c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hcv7d" podUID="8ea49426-4d71-485c-81af-880c7b039c97" Jan 17 12:18:30.546348 containerd[1456]: time="2025-01-17T12:18:30.546286380Z" level=error msg="Failed to destroy network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.546797 containerd[1456]: time="2025-01-17T12:18:30.546769577Z" level=error msg="encountered an error cleaning up failed sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.546856 containerd[1456]: time="2025-01-17T12:18:30.546826093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-4zkzk,Uid:0dd97009-378f-4ef4-b765-3bec41555af3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.547116 kubelet[2614]: E0117 12:18:30.547067 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.547212 kubelet[2614]: E0117 12:18:30.547141 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" Jan 17 12:18:30.547212 kubelet[2614]: E0117 12:18:30.547164 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" Jan 17 12:18:30.547302 kubelet[2614]: E0117 12:18:30.547215 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d54fccbdb-4zkzk_calico-apiserver(0dd97009-378f-4ef4-b765-3bec41555af3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d54fccbdb-4zkzk_calico-apiserver(0dd97009-378f-4ef4-b765-3bec41555af3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" podUID="0dd97009-378f-4ef4-b765-3bec41555af3" Jan 17 12:18:30.562258 containerd[1456]: time="2025-01-17T12:18:30.562186800Z" level=error msg="Failed to destroy network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.562719 containerd[1456]: 
time="2025-01-17T12:18:30.562688532Z" level=error msg="encountered an error cleaning up failed sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.562838 containerd[1456]: time="2025-01-17T12:18:30.562745149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f856786c-xcmdb,Uid:bc03b6ec-75c2-4b0b-bb26-44676fd171af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.563058 kubelet[2614]: E0117 12:18:30.563013 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.563111 kubelet[2614]: E0117 12:18:30.563080 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" Jan 17 12:18:30.563111 kubelet[2614]: E0117 12:18:30.563103 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" Jan 17 12:18:30.563179 kubelet[2614]: E0117 12:18:30.563157 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67f856786c-xcmdb_calico-system(bc03b6ec-75c2-4b0b-bb26-44676fd171af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67f856786c-xcmdb_calico-system(bc03b6ec-75c2-4b0b-bb26-44676fd171af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" podUID="bc03b6ec-75c2-4b0b-bb26-44676fd171af" Jan 17 12:18:30.727811 containerd[1456]: time="2025-01-17T12:18:30.727754250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-hj6qq,Uid:e0751a22-7602-4c7d-a7ee-e530eb41ad09,Namespace:calico-apiserver,Attempt:0,}" Jan 17 
12:18:30.810492 containerd[1456]: time="2025-01-17T12:18:30.810391199Z" level=error msg="Failed to destroy network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.810997 containerd[1456]: time="2025-01-17T12:18:30.810963122Z" level=error msg="encountered an error cleaning up failed sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.811064 containerd[1456]: time="2025-01-17T12:18:30.811031390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-hj6qq,Uid:e0751a22-7602-4c7d-a7ee-e530eb41ad09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.811379 kubelet[2614]: E0117 12:18:30.811327 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:30.811454 kubelet[2614]: E0117 12:18:30.811419 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" Jan 17 12:18:30.811480 kubelet[2614]: E0117 12:18:30.811454 2614 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" Jan 17 12:18:30.811553 kubelet[2614]: E0117 12:18:30.811519 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d54fccbdb-hj6qq_calico-apiserver(e0751a22-7602-4c7d-a7ee-e530eb41ad09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d54fccbdb-hj6qq_calico-apiserver(e0751a22-7602-4c7d-a7ee-e530eb41ad09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" podUID="e0751a22-7602-4c7d-a7ee-e530eb41ad09" Jan 17 12:18:30.901194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071-shm.mount: Deactivated successfully. Jan 17 12:18:31.231536 kubelet[2614]: I0117 12:18:31.231495 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:31.232123 containerd[1456]: time="2025-01-17T12:18:31.232085925Z" level=info msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" Jan 17 12:18:31.232447 containerd[1456]: time="2025-01-17T12:18:31.232282975Z" level=info msg="Ensure that sandbox 070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef in task-service has been cleanup successfully" Jan 17 12:18:31.233884 kubelet[2614]: I0117 12:18:31.233860 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:31.234426 containerd[1456]: time="2025-01-17T12:18:31.234381833Z" level=info msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" Jan 17 12:18:31.234701 containerd[1456]: time="2025-01-17T12:18:31.234674814Z" level=info msg="Ensure that sandbox 8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24 in task-service has been cleanup successfully" Jan 17 12:18:31.235783 kubelet[2614]: I0117 12:18:31.235429 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:31.236048 containerd[1456]: time="2025-01-17T12:18:31.235981156Z" level=info msg="StopPodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" Jan 17 12:18:31.236176 containerd[1456]: time="2025-01-17T12:18:31.236145784Z" level=info msg="Ensure that sandbox 251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d in task-service has been cleanup successfully" Jan 17 12:18:31.237310 kubelet[2614]: I0117 12:18:31.236957 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:31.237422 containerd[1456]: time="2025-01-17T12:18:31.237397863Z" level=info msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" Jan 17 12:18:31.237593 containerd[1456]: time="2025-01-17T12:18:31.237557524Z" level=info msg="Ensure that sandbox 2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f in task-service has been cleanup successfully" Jan 17 12:18:31.239195 kubelet[2614]: I0117 12:18:31.239108 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:31.239963 containerd[1456]: time="2025-01-17T12:18:31.239931269Z" level=info msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" Jan 17 12:18:31.240132 containerd[1456]: time="2025-01-17T12:18:31.240103692Z" level=info msg="Ensure that sandbox 18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071 in task-service has been cleanup successfully" Jan 17 12:18:31.243435 kubelet[2614]: I0117 12:18:31.243373 2614 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:31.244885 containerd[1456]: time="2025-01-17T12:18:31.244821986Z" level=info msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" Jan 17 12:18:31.245152 containerd[1456]: time="2025-01-17T12:18:31.245127851Z" level=info msg="Ensure that sandbox 5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a in task-service has been cleanup successfully" Jan 17 12:18:31.307485 containerd[1456]: time="2025-01-17T12:18:31.307048154Z" level=error msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" failed" error="failed to destroy network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.307641 kubelet[2614]: E0117 12:18:31.307260 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:31.307641 kubelet[2614]: E0117 12:18:31.307322 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f"} Jan 17 12:18:31.307641 kubelet[2614]: E0117 12:18:31.307409 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ea49426-4d71-485c-81af-880c7b039c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.307641 kubelet[2614]: E0117 12:18:31.307438 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ea49426-4d71-485c-81af-880c7b039c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hcv7d" podUID="8ea49426-4d71-485c-81af-880c7b039c97" Jan 17 12:18:31.313103 containerd[1456]: time="2025-01-17T12:18:31.312910816Z" level=error msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" failed" error="failed to destroy network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.313103 containerd[1456]: 
time="2025-01-17T12:18:31.313028097Z" level=error msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" failed" error="failed to destroy network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.313444 kubelet[2614]: E0117 12:18:31.313380 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:31.313498 kubelet[2614]: E0117 12:18:31.313457 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071"} Jan 17 12:18:31.313528 kubelet[2614]: E0117 12:18:31.313498 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.313594 kubelet[2614]: E0117 12:18:31.313528 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d31fd11e-f0a1-43ba-8772-07b005c2e59d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9b6b" podUID="d31fd11e-f0a1-43ba-8772-07b005c2e59d" Jan 17 12:18:31.313594 kubelet[2614]: E0117 12:18:31.313380 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:31.313594 kubelet[2614]: E0117 12:18:31.313563 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef"} Jan 17 12:18:31.313594 kubelet[2614]: E0117 12:18:31.313586 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0751a22-7602-4c7d-a7ee-e530eb41ad09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.313734 kubelet[2614]: E0117 12:18:31.313607 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0751a22-7602-4c7d-a7ee-e530eb41ad09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" podUID="e0751a22-7602-4c7d-a7ee-e530eb41ad09" Jan 17 12:18:31.315431 containerd[1456]: time="2025-01-17T12:18:31.315364622Z" level=error msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" failed" error="failed to destroy network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.315786 kubelet[2614]: E0117 12:18:31.315748 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:31.315827 kubelet[2614]: E0117 12:18:31.315787 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24"} Jan 17 12:18:31.315827 kubelet[2614]: E0117 12:18:31.315818 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dd97009-378f-4ef4-b765-3bec41555af3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.315906 kubelet[2614]: E0117 12:18:31.315844 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dd97009-378f-4ef4-b765-3bec41555af3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" podUID="0dd97009-378f-4ef4-b765-3bec41555af3" Jan 17 12:18:31.317363 containerd[1456]: time="2025-01-17T12:18:31.317332043Z" level=error msg="StopPodSandbox for 
\"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" failed" error="failed to destroy network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.317490 kubelet[2614]: E0117 12:18:31.317463 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:31.317558 kubelet[2614]: E0117 12:18:31.317491 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d"} Jan 17 12:18:31.317558 kubelet[2614]: E0117 12:18:31.317537 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc03b6ec-75c2-4b0b-bb26-44676fd171af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.317680 kubelet[2614]: E0117 12:18:31.317563 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc03b6ec-75c2-4b0b-bb26-44676fd171af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" podUID="bc03b6ec-75c2-4b0b-bb26-44676fd171af" Jan 17 12:18:31.323527 containerd[1456]: time="2025-01-17T12:18:31.323490761Z" level=error msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" failed" error="failed to destroy network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:31.323747 kubelet[2614]: E0117 12:18:31.323694 2614 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:31.323805 kubelet[2614]: E0117 12:18:31.323743 2614 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a"} Jan 17 12:18:31.323805 kubelet[2614]: E0117 12:18:31.323771 2614 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f642288-757c-4272-856a-d51e252297f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:31.323805 kubelet[2614]: E0117 12:18:31.323792 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f642288-757c-4272-856a-d51e252297f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z4vvv" podUID="8f642288-757c-4272-856a-d51e252297f4" Jan 17 12:18:34.860473 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:53194.service - OpenSSH per-connection server daemon (10.0.0.1:53194). Jan 17 12:18:34.901576 sshd[3793]: Accepted publickey for core from 10.0.0.1 port 53194 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:34.903554 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:34.910030 systemd-logind[1439]: New session 10 of user core. Jan 17 12:18:34.916810 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:18:35.072991 sshd[3793]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:35.078341 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:53194.service: Deactivated successfully. Jan 17 12:18:35.081717 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:18:35.083690 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:18:35.085245 systemd-logind[1439]: Removed session 10. Jan 17 12:18:36.048249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110542636.mount: Deactivated successfully. 
Jan 17 12:18:37.697698 containerd[1456]: time="2025-01-17T12:18:37.697420994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:37.699400 containerd[1456]: time="2025-01-17T12:18:37.699346871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:18:37.701032 containerd[1456]: time="2025-01-17T12:18:37.700993763Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:37.703190 containerd[1456]: time="2025-01-17T12:18:37.703147849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:37.703812 containerd[1456]: time="2025-01-17T12:18:37.703759662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.474214487s" Jan 17 12:18:37.703812 containerd[1456]: time="2025-01-17T12:18:37.703805638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:18:37.713959 containerd[1456]: time="2025-01-17T12:18:37.713918854Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:18:37.737277 containerd[1456]: time="2025-01-17T12:18:37.737198245Z" level=info msg="CreateContainer within sandbox \"62e0aedd2f7e296a829d0d2351564c73c47ff709ababa04caba0b5afa87972f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba\"" Jan 17 12:18:37.738011 containerd[1456]: time="2025-01-17T12:18:37.737935784Z" level=info msg="StartContainer for \"bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba\"" Jan 17 12:18:37.825913 systemd[1]: Started cri-containerd-bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba.scope - libcontainer container bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba. Jan 17 12:18:37.859798 containerd[1456]: time="2025-01-17T12:18:37.859751023Z" level=info msg="StartContainer for \"bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba\" returns successfully" Jan 17 12:18:37.952085 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:18:37.952250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
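The PullImage entry above reports the calico/node:v3.29.1 image (142741872 bytes) fetched in 7.474214487s, i.e. roughly 19 MB/s. A small, hypothetical helper that reproduces that arithmetic from the two figures printed in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken verbatim from the PullImage log entry above.
	const imageBytes = 142741872 // reported size of ghcr.io/flatcar/calico/node:v3.29.1
	pullTime, _ := time.ParseDuration("7.474214487s")

	rate := float64(imageBytes) / pullTime.Seconds()
	fmt.Printf("~%.1f MB/s effective pull rate\n", rate/1e6) // ≈ 19.1 MB/s
}
```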
Jan 17 12:18:38.260502 kubelet[2614]: E0117 12:18:38.260461 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:39.262486 kubelet[2614]: E0117 12:18:39.262435 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:40.083890 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024). Jan 17 12:18:40.121012 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:40.122728 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:40.126725 systemd-logind[1439]: New session 11 of user core. Jan 17 12:18:40.135804 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:18:40.267527 sshd[4026]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:40.278605 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:48024.service: Deactivated successfully. Jan 17 12:18:40.280555 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:18:40.282262 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:18:40.287890 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038). Jan 17 12:18:40.288714 systemd-logind[1439]: Removed session 11. Jan 17 12:18:40.316349 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:40.318328 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:40.322554 systemd-logind[1439]: New session 12 of user core. Jan 17 12:18:40.338917 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:18:40.619722 sshd[4042]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:40.629958 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:48038.service: Deactivated successfully. Jan 17 12:18:40.631995 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:18:40.634138 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:18:40.640227 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:48052.service - OpenSSH per-connection server daemon (10.0.0.1:48052). Jan 17 12:18:40.641404 systemd-logind[1439]: Removed session 12. Jan 17 12:18:40.673952 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48052 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:40.676000 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:40.680666 systemd-logind[1439]: New session 13 of user core. Jan 17 12:18:40.693897 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:18:40.864194 sshd[4079]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:40.868552 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:48052.service: Deactivated successfully. Jan 17 12:18:40.870598 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:18:40.871406 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:18:40.872343 systemd-logind[1439]: Removed session 13. 
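The recurring "Nameserver limits exceeded" kubelet warnings (here and earlier in this section) come from the node's resolv.conf listing more nameservers than the kubelet will copy into a pod; the applied line shown in the log keeps only "1.1.1.1 1.0.0.1 8.8.8.8". The sketch below illustrates that truncation under the assumption that the limit is three nameservers, which matches the three servers kept above (the actual constant lives in the kubelet dns.go file the log entries reference).

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers is the assumed kubelet limit the warning refers to.
const maxNameservers = 3

// applyNameserverLimit keeps at most maxNameservers entries and reports
// whether anything was dropped, as the warning in the log describes.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf contents; only the first three survive.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, truncated := applyNameserverLimit(host)
	fmt.Printf("applied nameserver line: %s (truncated=%v)\n", strings.Join(applied, " "), truncated)
}
```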
Jan 17 12:18:43.132147 containerd[1456]: time="2025-01-17T12:18:43.131549787Z" level=info msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" Jan 17 12:18:43.132147 containerd[1456]: time="2025-01-17T12:18:43.131726027Z" level=info msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" Jan 17 12:18:43.139805 containerd[1456]: time="2025-01-17T12:18:43.132510722Z" level=info msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" Jan 17 12:18:43.267901 kubelet[2614]: I0117 12:18:43.267825 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bg2c5" podStartSLOduration=6.114492412 podStartE2EDuration="29.267801963s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:14.551297948 +0000 UTC m=+23.513852263" lastFinishedPulling="2025-01-17 12:18:37.704607499 +0000 UTC m=+46.667161814" observedRunningTime="2025-01-17 12:18:38.281988472 +0000 UTC m=+47.244542787" watchObservedRunningTime="2025-01-17 12:18:43.267801963 +0000 UTC m=+52.230356278" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.270 [INFO][4187] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.270 [INFO][4187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" iface="eth0" netns="/var/run/netns/cni-28288046-fed6-3f58-caaf-a665fcd2ec2b" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" iface="eth0" netns="/var/run/netns/cni-28288046-fed6-3f58-caaf-a665fcd2ec2b" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.272 [INFO][4187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" iface="eth0" netns="/var/run/netns/cni-28288046-fed6-3f58-caaf-a665fcd2ec2b" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.272 [INFO][4187] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.272 [INFO][4187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.343 [WARNING][4209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.343 [INFO][4209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.345 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:43.350583 containerd[1456]: 2025-01-17 12:18:43.348 [INFO][4187] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:43.351619 containerd[1456]: time="2025-01-17T12:18:43.351564483Z" level=info msg="TearDown network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" successfully" Jan 17 12:18:43.351703 containerd[1456]: time="2025-01-17T12:18:43.351617997Z" level=info msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" returns successfully" Jan 17 12:18:43.352996 containerd[1456]: time="2025-01-17T12:18:43.352516281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-4zkzk,Uid:0dd97009-378f-4ef4-b765-3bec41555af3,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:18:43.355297 systemd[1]: run-netns-cni\x2d28288046\x2dfed6\x2d3f58\x2dcaaf\x2da665fcd2ec2b.mount: Deactivated successfully. Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.269 [INFO][4186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.269 [INFO][4186] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" iface="eth0" netns="/var/run/netns/cni-28e0c657-6b79-0bfd-8c32-77d650e2ab24" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4186] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" iface="eth0" netns="/var/run/netns/cni-28e0c657-6b79-0bfd-8c32-77d650e2ab24" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4186] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" iface="eth0" netns="/var/run/netns/cni-28e0c657-6b79-0bfd-8c32-77d650e2ab24" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.345 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.352 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.352 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.354 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:43.361985 containerd[1456]: 2025-01-17 12:18:43.359 [INFO][4186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:43.363840 containerd[1456]: time="2025-01-17T12:18:43.362206545Z" level=info msg="TearDown network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" successfully" Jan 17 12:18:43.363840 containerd[1456]: time="2025-01-17T12:18:43.362238176Z" level=info msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" returns successfully" Jan 17 12:18:43.363902 kubelet[2614]: E0117 12:18:43.362784 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:43.364512 containerd[1456]: time="2025-01-17T12:18:43.364210733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcv7d,Uid:8ea49426-4d71-485c-81af-880c7b039c97,Namespace:kube-system,Attempt:1,}" Jan 17 12:18:43.366982 systemd[1]: run-netns-cni\x2d28e0c657\x2d6b79\x2d0bfd\x2d8c32\x2d77d650e2ab24.mount: Deactivated successfully. 
Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.266 [INFO][4185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.269 [INFO][4185] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" iface="eth0" netns="/var/run/netns/cni-a381d3c1-4579-4b3e-a432-bb9e01207402" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.270 [INFO][4185] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" iface="eth0" netns="/var/run/netns/cni-a381d3c1-4579-4b3e-a432-bb9e01207402" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4185] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" iface="eth0" netns="/var/run/netns/cni-a381d3c1-4579-4b3e-a432-bb9e01207402" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.271 [INFO][4185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.335 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.355 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.363 [WARNING][4208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.363 [INFO][4208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.370 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:43.376512 containerd[1456]: 2025-01-17 12:18:43.374 [INFO][4185] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:43.377122 containerd[1456]: time="2025-01-17T12:18:43.377072339Z" level=info msg="TearDown network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" successfully" Jan 17 12:18:43.377122 containerd[1456]: time="2025-01-17T12:18:43.377115072Z" level=info msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" returns successfully" Jan 17 12:18:43.377756 kubelet[2614]: E0117 12:18:43.377718 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:43.378450 containerd[1456]: time="2025-01-17T12:18:43.378362620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4vvv,Uid:8f642288-757c-4272-856a-d51e252297f4,Namespace:kube-system,Attempt:1,}" Jan 17 12:18:43.379922 systemd[1]: run-netns-cni\x2da381d3c1\x2d4579\x2d4b3e\x2da432\x2dbb9e01207402.mount: Deactivated successfully. Jan 17 12:18:43.551699 systemd-networkd[1386]: calic545b7b8aeb: Link UP Jan 17 12:18:43.552546 systemd-networkd[1386]: calic545b7b8aeb: Gained carrier Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.427 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.442 [INFO][4233] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0 calico-apiserver-6d54fccbdb- calico-apiserver 0dd97009-378f-4ef4-b765-3bec41555af3 877 0 2025-01-17 12:18:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d54fccbdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d54fccbdb-4zkzk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic545b7b8aeb [] []}} ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.442 [INFO][4233] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.486 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" HandleID="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.498 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" HandleID="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000281160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d54fccbdb-4zkzk", "timestamp":"2025-01-17 12:18:43.486376039 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.498 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.499 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.499 [INFO][4271] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.502 [INFO][4271] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.509 [INFO][4271] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.518 [INFO][4271] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.523 [INFO][4271] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.527 [INFO][4271] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.527 [INFO][4271] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.529 [INFO][4271] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.534 [INFO][4271] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.540 [INFO][4271] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.540 [INFO][4271] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" host="localhost" Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.540 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:43.565220 containerd[1456]: 2025-01-17 12:18:43.541 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" HandleID="k8s-pod-network.0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.543 [INFO][4233] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dd97009-378f-4ef4-b765-3bec41555af3", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d54fccbdb-4zkzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic545b7b8aeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.544 [INFO][4233] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.544 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic545b7b8aeb ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.552 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.553 [INFO][4233] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dd97009-378f-4ef4-b765-3bec41555af3", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee", Pod:"calico-apiserver-6d54fccbdb-4zkzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic545b7b8aeb", MAC:"3e:3e:14:94:a2:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.565952 containerd[1456]: 2025-01-17 12:18:43.561 [INFO][4233] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-4zkzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:43.585088 systemd-networkd[1386]: cali02885ca282a: Link UP Jan 17 12:18:43.585886 systemd-networkd[1386]: cali02885ca282a: Gained carrier Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.454 [INFO][4243] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.468 [INFO][4243] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0 coredns-7db6d8ff4d- kube-system 8ea49426-4d71-485c-81af-880c7b039c97 876 0 2025-01-17 12:18:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hcv7d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02885ca282a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.468 [INFO][4243] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.518 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" HandleID="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.528 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" HandleID="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b2c90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hcv7d", "timestamp":"2025-01-17 12:18:43.517995131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.528 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.541 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.541 [INFO][4280] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.543 [INFO][4280] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.549 [INFO][4280] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.556 [INFO][4280] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.558 [INFO][4280] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.562 [INFO][4280] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.562 [INFO][4280] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.564 [INFO][4280] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3 Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.569 [INFO][4280] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.577 [INFO][4280] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.577 [INFO][4280] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" host="localhost" Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.577 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:43.606781 containerd[1456]: 2025-01-17 12:18:43.577 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" HandleID="k8s-pod-network.c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.580 [INFO][4243] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ea49426-4d71-485c-81af-880c7b039c97", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hcv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02885ca282a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.580 [INFO][4243] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.580 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02885ca282a ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.586 [INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.586 [INFO][4243] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ea49426-4d71-485c-81af-880c7b039c97", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3", Pod:"coredns-7db6d8ff4d-hcv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02885ca282a", MAC:"1a:84:c7:de:2c:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.608897 containerd[1456]: 2025-01-17 12:18:43.601 [INFO][4243] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hcv7d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:43.608897 containerd[1456]: time="2025-01-17T12:18:43.607359684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:43.608897 containerd[1456]: time="2025-01-17T12:18:43.607438205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:43.608897 containerd[1456]: time="2025-01-17T12:18:43.607466981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.608897 containerd[1456]: time="2025-01-17T12:18:43.607582003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.625349 systemd-networkd[1386]: calicfbda0952ab: Link UP Jan 17 12:18:43.626355 systemd-networkd[1386]: calicfbda0952ab: Gained carrier Jan 17 12:18:43.629956 systemd[1]: Started cri-containerd-0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee.scope - libcontainer container 0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee. Jan 17 12:18:43.645364 containerd[1456]: time="2025-01-17T12:18:43.645041799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:43.645364 containerd[1456]: time="2025-01-17T12:18:43.645103048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:43.645364 containerd[1456]: time="2025-01-17T12:18:43.645117815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.645364 containerd[1456]: time="2025-01-17T12:18:43.645199654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.464 [INFO][4255] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.477 [INFO][4255] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0 coredns-7db6d8ff4d- kube-system 8f642288-757c-4272-856a-d51e252297f4 875 0 2025-01-17 12:18:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-z4vvv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicfbda0952ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.479 [INFO][4255] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.522 [INFO][4286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" HandleID="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.531 [INFO][4286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" HandleID="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" 
Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000517e40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-z4vvv", "timestamp":"2025-01-17 12:18:43.522356279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.531 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.578 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.578 [INFO][4286] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.580 [INFO][4286] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.584 [INFO][4286] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.589 [INFO][4286] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.592 [INFO][4286] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.601 [INFO][4286] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.601 [INFO][4286] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.603 [INFO][4286] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653 Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.609 [INFO][4286] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.617 [INFO][4286] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.617 [INFO][4286] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" host="localhost" Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.617 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:43.646081 containerd[1456]: 2025-01-17 12:18:43.617 [INFO][4286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" HandleID="k8s-pod-network.306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.621 [INFO][4255] cni-plugin/k8s.go 386: Populated endpoint ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f642288-757c-4272-856a-d51e252297f4", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-z4vvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfbda0952ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.622 [INFO][4255] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.622 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfbda0952ab ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.627 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.628 
[INFO][4255] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f642288-757c-4272-856a-d51e252297f4", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653", Pod:"coredns-7db6d8ff4d-z4vvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfbda0952ab", MAC:"ce:8f:5d:ad:d2:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:43.646736 containerd[1456]: 2025-01-17 12:18:43.642 [INFO][4255] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z4vvv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:43.656302 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:43.673180 containerd[1456]: time="2025-01-17T12:18:43.673066383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:43.673180 containerd[1456]: time="2025-01-17T12:18:43.673143792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:43.673180 containerd[1456]: time="2025-01-17T12:18:43.673159573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.673427 containerd[1456]: time="2025-01-17T12:18:43.673377684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:43.674851 systemd[1]: Started cri-containerd-c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3.scope - libcontainer container c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3. Jan 17 12:18:43.696174 systemd[1]: Started cri-containerd-306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653.scope - libcontainer container 306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653. Jan 17 12:18:43.696486 kubelet[2614]: I0117 12:18:43.696225 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:43.696944 kubelet[2614]: E0117 12:18:43.696880 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:43.699361 containerd[1456]: time="2025-01-17T12:18:43.699321220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-4zkzk,Uid:0dd97009-378f-4ef4-b765-3bec41555af3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee\"" Jan 17 12:18:43.705517 containerd[1456]: time="2025-01-17T12:18:43.705454158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:18:43.710199 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:43.714888 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:43.766454 containerd[1456]: time="2025-01-17T12:18:43.765412330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcv7d,Uid:8ea49426-4d71-485c-81af-880c7b039c97,Namespace:kube-system,Attempt:1,} returns sandbox id \"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3\"" Jan 17 12:18:43.766814 kubelet[2614]: E0117 12:18:43.766772 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:43.770522 containerd[1456]: time="2025-01-17T12:18:43.770471436Z" level=info msg="CreateContainer within sandbox \"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:18:43.773675 containerd[1456]: time="2025-01-17T12:18:43.772558504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4vvv,Uid:8f642288-757c-4272-856a-d51e252297f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653\"" Jan 17 12:18:43.775699 kubelet[2614]: E0117 12:18:43.775670 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:43.788920 containerd[1456]: time="2025-01-17T12:18:43.788861482Z" level=info msg="CreateContainer within sandbox \"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:18:43.888187 containerd[1456]: time="2025-01-17T12:18:43.886579399Z" level=info msg="CreateContainer within sandbox \"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"6ab2db82381181ff86f2284439f49427bf8d44d1214b6f4a05adc410cb5634aa\"" Jan 17 12:18:43.888187 containerd[1456]: time="2025-01-17T12:18:43.887370056Z" level=info msg="StartContainer for \"6ab2db82381181ff86f2284439f49427bf8d44d1214b6f4a05adc410cb5634aa\"" Jan 17 12:18:43.896798 containerd[1456]: time="2025-01-17T12:18:43.896708611Z" level=info msg="CreateContainer within sandbox \"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28961ae59ab3886a0b9b33c08312728c710b6080621e7333c8ab761208f9dc90\"" Jan 17 12:18:43.899144 containerd[1456]: time="2025-01-17T12:18:43.898850275Z" level=info msg="StartContainer for \"28961ae59ab3886a0b9b33c08312728c710b6080621e7333c8ab761208f9dc90\"" Jan 17 12:18:43.929914 systemd[1]: Started cri-containerd-6ab2db82381181ff86f2284439f49427bf8d44d1214b6f4a05adc410cb5634aa.scope - libcontainer container 6ab2db82381181ff86f2284439f49427bf8d44d1214b6f4a05adc410cb5634aa. Jan 17 12:18:43.933551 systemd[1]: Started cri-containerd-28961ae59ab3886a0b9b33c08312728c710b6080621e7333c8ab761208f9dc90.scope - libcontainer container 28961ae59ab3886a0b9b33c08312728c710b6080621e7333c8ab761208f9dc90. Jan 17 12:18:43.979605 containerd[1456]: time="2025-01-17T12:18:43.979427747Z" level=info msg="StartContainer for \"28961ae59ab3886a0b9b33c08312728c710b6080621e7333c8ab761208f9dc90\" returns successfully" Jan 17 12:18:43.979605 containerd[1456]: time="2025-01-17T12:18:43.979471201Z" level=info msg="StartContainer for \"6ab2db82381181ff86f2284439f49427bf8d44d1214b6f4a05adc410cb5634aa\" returns successfully" Jan 17 12:18:44.372914 kubelet[2614]: E0117 12:18:44.370494 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:44.383209 kubelet[2614]: E0117 12:18:44.383131 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:44.384301 kubelet[2614]: E0117 12:18:44.384273 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:44.842616 kubelet[2614]: I0117 12:18:44.841908 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z4vvv" podStartSLOduration=39.841886496 podStartE2EDuration="39.841886496s" podCreationTimestamp="2025-01-17 12:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:44.500845232 +0000 UTC m=+53.463399547" watchObservedRunningTime="2025-01-17 12:18:44.841886496 +0000 UTC m=+53.804440811" Jan 17 12:18:44.842902 kubelet[2614]: I0117 12:18:44.842849 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hcv7d" podStartSLOduration=39.842838783 podStartE2EDuration="39.842838783s" podCreationTimestamp="2025-01-17 12:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:44.841625082 +0000 UTC m=+53.804179427" watchObservedRunningTime="2025-01-17 12:18:44.842838783 +0000 UTC m=+53.805393098" Jan 17 12:18:45.092839 systemd-networkd[1386]: calic545b7b8aeb: Gained IPv6LL Jan 17 
12:18:45.133418 containerd[1456]: time="2025-01-17T12:18:45.132645590Z" level=info msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" Jan 17 12:18:45.149746 kernel: bpftool[4617]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:18:45.220862 systemd-networkd[1386]: calicfbda0952ab: Gained IPv6LL Jan 17 12:18:45.285748 systemd-networkd[1386]: cali02885ca282a: Gained IPv6LL Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.268 [INFO][4609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.268 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" iface="eth0" netns="/var/run/netns/cni-bb30d030-e496-5316-a213-5ac8e076a5c8" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.269 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" iface="eth0" netns="/var/run/netns/cni-bb30d030-e496-5316-a213-5ac8e076a5c8" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.269 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" iface="eth0" netns="/var/run/netns/cni-bb30d030-e496-5316-a213-5ac8e076a5c8" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.269 [INFO][4609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.269 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.300 [INFO][4621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.300 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.300 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.306 [WARNING][4621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.306 [INFO][4621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.307 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:45.314223 containerd[1456]: 2025-01-17 12:18:45.310 [INFO][4609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:45.315252 containerd[1456]: time="2025-01-17T12:18:45.314923780Z" level=info msg="TearDown network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" successfully" Jan 17 12:18:45.315252 containerd[1456]: time="2025-01-17T12:18:45.314955170Z" level=info msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" returns successfully" Jan 17 12:18:45.315768 containerd[1456]: time="2025-01-17T12:18:45.315739582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-hj6qq,Uid:e0751a22-7602-4c7d-a7ee-e530eb41ad09,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:18:45.319449 systemd[1]: run-netns-cni\x2dbb30d030\x2de496\x2d5316\x2da213\x2d5ac8e076a5c8.mount: Deactivated successfully. Jan 17 12:18:45.385766 kubelet[2614]: E0117 12:18:45.385712 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:45.386567 kubelet[2614]: E0117 12:18:45.386171 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:45.462188 systemd-networkd[1386]: vxlan.calico: Link UP Jan 17 12:18:45.462201 systemd-networkd[1386]: vxlan.calico: Gained carrier Jan 17 12:18:45.885043 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062). Jan 17 12:18:45.920638 sshd[4706]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:45.922710 sshd[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:45.927250 systemd-logind[1439]: New session 14 of user core. Jan 17 12:18:45.933816 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:18:46.132084 containerd[1456]: time="2025-01-17T12:18:46.132023353Z" level=info msg="StopPodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" Jan 17 12:18:46.387677 kubelet[2614]: E0117 12:18:46.387623 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:46.388198 kubelet[2614]: E0117 12:18:46.387681 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:46.396970 sshd[4706]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:46.401284 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:48062.service: Deactivated successfully. Jan 17 12:18:46.403495 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:18:46.404262 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:18:46.405249 systemd-logind[1439]: Removed session 14. 
Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.606 [INFO][4734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.607 [INFO][4734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" iface="eth0" netns="/var/run/netns/cni-d19168e0-fdef-6b9d-3fba-eb07b519e0c4" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.607 [INFO][4734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" iface="eth0" netns="/var/run/netns/cni-d19168e0-fdef-6b9d-3fba-eb07b519e0c4" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.607 [INFO][4734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" iface="eth0" netns="/var/run/netns/cni-d19168e0-fdef-6b9d-3fba-eb07b519e0c4" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.607 [INFO][4734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.607 [INFO][4734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.635 [INFO][4745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.635 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.636 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.642 [WARNING][4745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.642 [INFO][4745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.645 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:46.651426 containerd[1456]: 2025-01-17 12:18:46.648 [INFO][4734] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:46.654858 containerd[1456]: time="2025-01-17T12:18:46.654801035Z" level=info msg="TearDown network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" successfully" Jan 17 12:18:46.654951 containerd[1456]: time="2025-01-17T12:18:46.654931105Z" level=info msg="StopPodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" returns successfully" Jan 17 12:18:46.655090 systemd[1]: run-netns-cni\x2dd19168e0\x2dfdef\x2d6b9d\x2d3fba\x2deb07b519e0c4.mount: Deactivated successfully. Jan 17 12:18:46.656028 containerd[1456]: time="2025-01-17T12:18:46.655720515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f856786c-xcmdb,Uid:bc03b6ec-75c2-4b0b-bb26-44676fd171af,Namespace:calico-system,Attempt:1,}" Jan 17 12:18:46.806366 systemd-networkd[1386]: calid08af2093ab: Link UP Jan 17 12:18:46.810950 systemd-networkd[1386]: calid08af2093ab: Gained carrier Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.703 [INFO][4754] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0 calico-apiserver-6d54fccbdb- calico-apiserver e0751a22-7602-4c7d-a7ee-e530eb41ad09 921 0 2025-01-17 12:18:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d54fccbdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d54fccbdb-hj6qq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid08af2093ab [] []}} ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.703 [INFO][4754] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.743 [INFO][4780] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" HandleID="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.758 [INFO][4780] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" HandleID="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e0e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d54fccbdb-hj6qq", "timestamp":"2025-01-17 12:18:46.743007008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.760 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.760 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.760 [INFO][4780] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.762 [INFO][4780] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.768 [INFO][4780] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.772 [INFO][4780] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.775 [INFO][4780] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.780 [INFO][4780] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.780 [INFO][4780] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.784 [INFO][4780] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6 Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.789 [INFO][4780] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4780] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4780] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" host="localhost" Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:46.834977 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" HandleID="k8s-pod-network.86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.801 [INFO][4754] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0751a22-7602-4c7d-a7ee-e530eb41ad09", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d54fccbdb-hj6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid08af2093ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.802 [INFO][4754] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.802 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid08af2093ab ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.809 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.810 [INFO][4754] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0751a22-7602-4c7d-a7ee-e530eb41ad09", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6", Pod:"calico-apiserver-6d54fccbdb-hj6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid08af2093ab", MAC:"d6:1c:e6:48:51:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:46.836005 containerd[1456]: 2025-01-17 12:18:46.826 [INFO][4754] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fccbdb-hj6qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:46.860827 systemd-networkd[1386]: calif794bf9de38: Link UP Jan 17 12:18:46.862257 systemd-networkd[1386]: calif794bf9de38: Gained carrier Jan 17 12:18:46.883380 containerd[1456]: time="2025-01-17T12:18:46.881406638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:46.883539 containerd[1456]: time="2025-01-17T12:18:46.883397733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:46.883539 containerd[1456]: time="2025-01-17T12:18:46.883425556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:46.884339 containerd[1456]: time="2025-01-17T12:18:46.884109694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.733 [INFO][4768] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0 calico-kube-controllers-67f856786c- calico-system bc03b6ec-75c2-4b0b-bb26-44676fd171af 939 0 2025-01-17 12:18:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67f856786c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67f856786c-xcmdb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif794bf9de38 [] []}} ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.733 [INFO][4768] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.781 [INFO][4789] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" HandleID="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.791 [INFO][4789] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" HandleID="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003750d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67f856786c-xcmdb", "timestamp":"2025-01-17 12:18:46.781153784 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.791 [INFO][4789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.797 [INFO][4789] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.800 [INFO][4789] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.806 [INFO][4789] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.815 [INFO][4789] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.821 [INFO][4789] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.825 [INFO][4789] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.825 [INFO][4789] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.831 [INFO][4789] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986 Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.838 [INFO][4789] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.849 [INFO][4789] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.849 [INFO][4789] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" host="localhost" Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.849 [INFO][4789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:46.885540 containerd[1456]: 2025-01-17 12:18:46.849 [INFO][4789] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" HandleID="k8s-pod-network.bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.854 [INFO][4768] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0", GenerateName:"calico-kube-controllers-67f856786c-", Namespace:"calico-system", SelfLink:"", UID:"bc03b6ec-75c2-4b0b-bb26-44676fd171af", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f856786c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67f856786c-xcmdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif794bf9de38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.855 [INFO][4768] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.855 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif794bf9de38 ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.862 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.863 [INFO][4768] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0", GenerateName:"calico-kube-controllers-67f856786c-", Namespace:"calico-system", SelfLink:"", UID:"bc03b6ec-75c2-4b0b-bb26-44676fd171af", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f856786c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986", Pod:"calico-kube-controllers-67f856786c-xcmdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif794bf9de38", MAC:"be:b5:cb:03:40:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:46.886137 containerd[1456]: 2025-01-17 12:18:46.880 [INFO][4768] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986" Namespace="calico-system" Pod="calico-kube-controllers-67f856786c-xcmdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:46.920350 systemd[1]: Started cri-containerd-86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6.scope - libcontainer container 86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6. Jan 17 12:18:46.938256 containerd[1456]: time="2025-01-17T12:18:46.937790747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:46.938803 containerd[1456]: time="2025-01-17T12:18:46.938503810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:46.939115 containerd[1456]: time="2025-01-17T12:18:46.939057526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:46.939590 containerd[1456]: time="2025-01-17T12:18:46.939483667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:46.957100 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:46.976202 systemd[1]: Started cri-containerd-bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986.scope - libcontainer container bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986. Jan 17 12:18:46.991243 containerd[1456]: time="2025-01-17T12:18:46.990689763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fccbdb-hj6qq,Uid:e0751a22-7602-4c7d-a7ee-e530eb41ad09,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6\"" Jan 17 12:18:46.995682 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:47.022762 containerd[1456]: time="2025-01-17T12:18:47.022684243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f856786c-xcmdb,Uid:bc03b6ec-75c2-4b0b-bb26-44676fd171af,Namespace:calico-system,Attempt:1,} returns sandbox id \"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986\"" Jan 17 12:18:47.151707 containerd[1456]: time="2025-01-17T12:18:47.133345681Z" level=info msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" Jan 17 12:18:47.157765 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.325 [INFO][4927] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.326 [INFO][4927] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" iface="eth0" netns="/var/run/netns/cni-034c013d-2055-2725-98d1-f5507141b98a" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.326 [INFO][4927] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" iface="eth0" netns="/var/run/netns/cni-034c013d-2055-2725-98d1-f5507141b98a" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.326 [INFO][4927] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" iface="eth0" netns="/var/run/netns/cni-034c013d-2055-2725-98d1-f5507141b98a" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.326 [INFO][4927] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.326 [INFO][4927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.744 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.744 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.744 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.750 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.750 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.752 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:47.757478 containerd[1456]: 2025-01-17 12:18:47.755 [INFO][4927] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:47.759240 containerd[1456]: time="2025-01-17T12:18:47.758759748Z" level=info msg="TearDown network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" successfully" Jan 17 12:18:47.759240 containerd[1456]: time="2025-01-17T12:18:47.758788564Z" level=info msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" returns successfully" Jan 17 12:18:47.759551 containerd[1456]: time="2025-01-17T12:18:47.759488000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9b6b,Uid:d31fd11e-f0a1-43ba-8772-07b005c2e59d,Namespace:calico-system,Attempt:1,}" Jan 17 12:18:47.762265 systemd[1]: run-netns-cni\x2d034c013d\x2d2055\x2d2725\x2d98d1\x2df5507141b98a.mount: Deactivated successfully. 
Jan 17 12:18:48.164996 systemd-networkd[1386]: calif794bf9de38: Gained IPv6LL Jan 17 12:18:48.623885 systemd-networkd[1386]: cali33e66373d38: Link UP Jan 17 12:18:48.624827 systemd-networkd[1386]: cali33e66373d38: Gained carrier Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.440 [INFO][4946] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b9b6b-eth0 csi-node-driver- calico-system d31fd11e-f0a1-43ba-8772-07b005c2e59d 950 0 2025-01-17 12:18:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b9b6b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali33e66373d38 [] []}} ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.440 [INFO][4946] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.476 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" HandleID="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.484 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" HandleID="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b9b6b", "timestamp":"2025-01-17 12:18:48.476729414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.485 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.485 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.485 [INFO][4960] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.486 [INFO][4960] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.490 [INFO][4960] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.494 [INFO][4960] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.495 [INFO][4960] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.497 [INFO][4960] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.497 [INFO][4960] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.499 [INFO][4960] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1 Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.559 [INFO][4960] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.617 [INFO][4960] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.617 [INFO][4960] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" host="localhost" Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.617 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:18:48.798156 containerd[1456]: 2025-01-17 12:18:48.617 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" HandleID="k8s-pod-network.c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.620 [INFO][4946] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9b6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d31fd11e-f0a1-43ba-8772-07b005c2e59d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b9b6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33e66373d38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.620 [INFO][4946] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.620 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33e66373d38 ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.624 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.624 [INFO][4946] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9b6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d31fd11e-f0a1-43ba-8772-07b005c2e59d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1", Pod:"csi-node-driver-b9b6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33e66373d38", MAC:"6e:d7:88:89:46:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:48.805592 containerd[1456]: 2025-01-17 12:18:48.787 [INFO][4946] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1" Namespace="calico-system" Pod="csi-node-driver-b9b6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:48.806352 systemd-networkd[1386]: calid08af2093ab: Gained IPv6LL Jan 17 12:18:48.881908 kubelet[2614]: E0117 12:18:48.880689 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:18:48.882488 containerd[1456]: time="2025-01-17T12:18:48.882226575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:48.882488 containerd[1456]: time="2025-01-17T12:18:48.882295226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:48.882488 containerd[1456]: time="2025-01-17T12:18:48.882313421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:48.882488 containerd[1456]: time="2025-01-17T12:18:48.882422020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:48.908870 systemd[1]: Started cri-containerd-c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1.scope - libcontainer container c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1. 
Jan 17 12:18:48.921276 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:18:48.936414 containerd[1456]: time="2025-01-17T12:18:48.936351838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9b6b,Uid:d31fd11e-f0a1-43ba-8772-07b005c2e59d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1\"" Jan 17 12:18:50.383475 containerd[1456]: time="2025-01-17T12:18:50.383413935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.388613 containerd[1456]: time="2025-01-17T12:18:50.388565329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:18:50.398020 containerd[1456]: time="2025-01-17T12:18:50.397983018Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.417850 containerd[1456]: time="2025-01-17T12:18:50.417788628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.418903 containerd[1456]: time="2025-01-17T12:18:50.418703035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 6.713168631s" Jan 17 12:18:50.418903 containerd[1456]: time="2025-01-17T12:18:50.418761928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:18:50.419850 containerd[1456]: time="2025-01-17T12:18:50.419810131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:18:50.421025 containerd[1456]: time="2025-01-17T12:18:50.420972974Z" level=info msg="CreateContainer within sandbox \"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:18:50.468805 systemd-networkd[1386]: cali33e66373d38: Gained IPv6LL Jan 17 12:18:50.614266 containerd[1456]: time="2025-01-17T12:18:50.614203259Z" level=info msg="CreateContainer within sandbox \"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95231c7d76bea91832106e5116e35c89288aec4c8d87f5390a54ce995ffa7b0c\"" Jan 17 12:18:50.614775 containerd[1456]: time="2025-01-17T12:18:50.614752313Z" level=info msg="StartContainer for \"95231c7d76bea91832106e5116e35c89288aec4c8d87f5390a54ce995ffa7b0c\"" Jan 17 12:18:50.657835 systemd[1]: Started cri-containerd-95231c7d76bea91832106e5116e35c89288aec4c8d87f5390a54ce995ffa7b0c.scope - libcontainer container 95231c7d76bea91832106e5116e35c89288aec4c8d87f5390a54ce995ffa7b0c. 
Jan 17 12:18:50.702116 containerd[1456]: time="2025-01-17T12:18:50.702052900Z" level=info msg="StartContainer for \"95231c7d76bea91832106e5116e35c89288aec4c8d87f5390a54ce995ffa7b0c\" returns successfully" Jan 17 12:18:50.804569 containerd[1456]: time="2025-01-17T12:18:50.804496082Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.805814 containerd[1456]: time="2025-01-17T12:18:50.805231474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:18:50.807365 containerd[1456]: time="2025-01-17T12:18:50.807314935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 387.470228ms" Jan 17 12:18:50.807365 containerd[1456]: time="2025-01-17T12:18:50.807357506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:18:50.808618 containerd[1456]: time="2025-01-17T12:18:50.808356586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:18:50.809523 containerd[1456]: time="2025-01-17T12:18:50.809416492Z" level=info msg="CreateContainer within sandbox \"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:18:50.826401 containerd[1456]: time="2025-01-17T12:18:50.826350326Z" level=info msg="CreateContainer within sandbox \"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b73bddf49f8038f5141cb795b7f4b73e701d3a28d0444111b430e47ac9b21b91\"" Jan 17 12:18:50.828420 containerd[1456]: time="2025-01-17T12:18:50.827126938Z" level=info msg="StartContainer for \"b73bddf49f8038f5141cb795b7f4b73e701d3a28d0444111b430e47ac9b21b91\"" Jan 17 12:18:50.864856 systemd[1]: Started cri-containerd-b73bddf49f8038f5141cb795b7f4b73e701d3a28d0444111b430e47ac9b21b91.scope - libcontainer container b73bddf49f8038f5141cb795b7f4b73e701d3a28d0444111b430e47ac9b21b91. Jan 17 12:18:50.926139 containerd[1456]: time="2025-01-17T12:18:50.925931620Z" level=info msg="StartContainer for \"b73bddf49f8038f5141cb795b7f4b73e701d3a28d0444111b430e47ac9b21b91\" returns successfully" Jan 17 12:18:51.127223 containerd[1456]: time="2025-01-17T12:18:51.126919103Z" level=info msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.187 [WARNING][5152] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ea49426-4d71-485c-81af-880c7b039c97", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3", Pod:"coredns-7db6d8ff4d-hcv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02885ca282a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.188 [INFO][5152] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.188 [INFO][5152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" iface="eth0" netns="" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.188 [INFO][5152] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.188 [INFO][5152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.217 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.217 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.217 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.269 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.269 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.271 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.276179 containerd[1456]: 2025-01-17 12:18:51.273 [INFO][5152] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.276179 containerd[1456]: time="2025-01-17T12:18:51.276146944Z" level=info msg="TearDown network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" successfully" Jan 17 12:18:51.276179 containerd[1456]: time="2025-01-17T12:18:51.276171891Z" level=info msg="StopPodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" returns successfully" Jan 17 12:18:51.293401 containerd[1456]: time="2025-01-17T12:18:51.293352920Z" level=info msg="RemovePodSandbox for \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" Jan 17 12:18:51.295565 containerd[1456]: time="2025-01-17T12:18:51.295532053Z" level=info msg="Forcibly stopping sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\"" Jan 17 12:18:51.414982 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074). Jan 17 12:18:51.425298 kubelet[2614]: I0117 12:18:51.424346 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d54fccbdb-hj6qq" podStartSLOduration=33.608566261 podStartE2EDuration="37.424323857s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:46.992428963 +0000 UTC m=+55.954983278" lastFinishedPulling="2025-01-17 12:18:50.808186559 +0000 UTC m=+59.770740874" observedRunningTime="2025-01-17 12:18:51.419362693 +0000 UTC m=+60.381917009" watchObservedRunningTime="2025-01-17 12:18:51.424323857 +0000 UTC m=+60.386878172" Jan 17 12:18:51.447290 kubelet[2614]: I0117 12:18:51.446909 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d54fccbdb-4zkzk" podStartSLOduration=30.732194092 podStartE2EDuration="37.446889641s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:43.70494186 +0000 UTC m=+52.667496175" lastFinishedPulling="2025-01-17 12:18:50.419637409 +0000 UTC m=+59.382191724" observedRunningTime="2025-01-17 12:18:51.446818394 +0000 UTC m=+60.409372720" watchObservedRunningTime="2025-01-17 12:18:51.446889641 +0000 UTC m=+60.409443956" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.372 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ea49426-4d71-485c-81af-880c7b039c97", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62152e2cfe357e9455b4f4adc6bc1d5557683faca9746bc5061c6353a6b22f3", Pod:"coredns-7db6d8ff4d-hcv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02885ca282a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.373 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.373 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" iface="eth0" netns="" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.373 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.373 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.411 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.412 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.412 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.429 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.429 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" HandleID="k8s-pod-network.2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Workload="localhost-k8s-coredns--7db6d8ff4d--hcv7d-eth0" Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.433 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.454025 containerd[1456]: 2025-01-17 12:18:51.443 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f" Jan 17 12:18:51.455722 containerd[1456]: time="2025-01-17T12:18:51.454060256Z" level=info msg="TearDown network for sandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" successfully" Jan 17 12:18:51.460989 containerd[1456]: time="2025-01-17T12:18:51.460948738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:51.461082 containerd[1456]: time="2025-01-17T12:18:51.461011148Z" level=info msg="RemovePodSandbox \"2daf5f912865c55fb269a1f026c0efaa46890617a583ac56cc71f3d32035224f\" returns successfully" Jan 17 12:18:51.461957 containerd[1456]: time="2025-01-17T12:18:51.461907207Z" level=info msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" Jan 17 12:18:51.469339 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:51.471963 sshd[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:51.485969 systemd-logind[1439]: New session 15 of user core. Jan 17 12:18:51.495875 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.526 [WARNING][5222] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f642288-757c-4272-856a-d51e252297f4", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653", Pod:"coredns-7db6d8ff4d-z4vvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfbda0952ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.526 [INFO][5222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.526 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" iface="eth0" netns="" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.526 [INFO][5222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.526 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.548 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.548 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.548 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.558 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.558 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.560 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.569846 containerd[1456]: 2025-01-17 12:18:51.563 [INFO][5222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.569846 containerd[1456]: time="2025-01-17T12:18:51.569777013Z" level=info msg="TearDown network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" successfully" Jan 17 12:18:51.569846 containerd[1456]: time="2025-01-17T12:18:51.569800397Z" level=info msg="StopPodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" returns successfully" Jan 17 12:18:51.575600 containerd[1456]: time="2025-01-17T12:18:51.575557357Z" level=info msg="RemovePodSandbox for \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" Jan 17 12:18:51.575778 containerd[1456]: time="2025-01-17T12:18:51.575762160Z" level=info msg="Forcibly stopping sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\"" Jan 17 12:18:51.644614 sshd[5202]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:51.648808 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:18:51.649691 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:46074.service: Deactivated successfully. Jan 17 12:18:51.652394 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:18:51.656383 systemd-logind[1439]: Removed session 15. Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.623 [WARNING][5264] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f642288-757c-4272-856a-d51e252297f4", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"306fee9e7394e631e8884b4156b3fd498ac1ef94c59622a28f3bdbd1c1784653", Pod:"coredns-7db6d8ff4d-z4vvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfbda0952ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.623 [INFO][5264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.623 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" iface="eth0" netns="" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.623 [INFO][5264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.623 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.651 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.651 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.651 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.657 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.657 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" HandleID="k8s-pod-network.5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Workload="localhost-k8s-coredns--7db6d8ff4d--z4vvv-eth0" Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.661 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.673767 containerd[1456]: 2025-01-17 12:18:51.667 [INFO][5264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a" Jan 17 12:18:51.673767 containerd[1456]: time="2025-01-17T12:18:51.671396088Z" level=info msg="TearDown network for sandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" successfully" Jan 17 12:18:51.721960 containerd[1456]: time="2025-01-17T12:18:51.721886264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:51.722122 containerd[1456]: time="2025-01-17T12:18:51.721968551Z" level=info msg="RemovePodSandbox \"5caa3445a869f7216e06187668995bf3bf12e2aea9d43b9acff8c208a8973e9a\" returns successfully" Jan 17 12:18:51.722506 containerd[1456]: time="2025-01-17T12:18:51.722487177Z" level=info msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.760 [WARNING][5296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9b6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d31fd11e-f0a1-43ba-8772-07b005c2e59d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1", Pod:"csi-node-driver-b9b6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33e66373d38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.761 [INFO][5296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.761 [INFO][5296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" iface="eth0" netns="" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.761 [INFO][5296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.761 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.794 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.794 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.794 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.799 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.799 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.801 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.806832 containerd[1456]: 2025-01-17 12:18:51.803 [INFO][5296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.806832 containerd[1456]: time="2025-01-17T12:18:51.806858137Z" level=info msg="TearDown network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" successfully" Jan 17 12:18:51.806832 containerd[1456]: time="2025-01-17T12:18:51.806878615Z" level=info msg="StopPodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" returns successfully" Jan 17 12:18:51.807582 containerd[1456]: time="2025-01-17T12:18:51.807428010Z" level=info msg="RemovePodSandbox for \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" Jan 17 12:18:51.807582 containerd[1456]: time="2025-01-17T12:18:51.807470492Z" level=info msg="Forcibly stopping sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\"" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.857 [WARNING][5325] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9b6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d31fd11e-f0a1-43ba-8772-07b005c2e59d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1", Pod:"csi-node-driver-b9b6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33e66373d38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.857 [INFO][5325] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.857 [INFO][5325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" iface="eth0" netns="" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.857 [INFO][5325] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.857 [INFO][5325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.895 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.895 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.895 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.900 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.900 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" HandleID="k8s-pod-network.18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Workload="localhost-k8s-csi--node--driver--b9b6b-eth0" Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.901 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.906648 containerd[1456]: 2025-01-17 12:18:51.903 [INFO][5325] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071" Jan 17 12:18:51.907065 containerd[1456]: time="2025-01-17T12:18:51.906699571Z" level=info msg="TearDown network for sandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" successfully" Jan 17 12:18:51.910439 containerd[1456]: time="2025-01-17T12:18:51.910402959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:51.910498 containerd[1456]: time="2025-01-17T12:18:51.910459548Z" level=info msg="RemovePodSandbox \"18f2e76c53a5efcecf87d55b50813162c95910dc8a28f6600ff6c75fdaf3e071\" returns successfully" Jan 17 12:18:51.910941 containerd[1456]: time="2025-01-17T12:18:51.910908109Z" level=info msg="StopPodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.946 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0", GenerateName:"calico-kube-controllers-67f856786c-", Namespace:"calico-system", SelfLink:"", UID:"bc03b6ec-75c2-4b0b-bb26-44676fd171af", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f856786c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986", Pod:"calico-kube-controllers-67f856786c-xcmdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif794bf9de38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.946 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.946 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" iface="eth0" netns="" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.946 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.946 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.966 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.966 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.966 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.971 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.971 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.973 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:51.978233 containerd[1456]: 2025-01-17 12:18:51.975 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:51.978680 containerd[1456]: time="2025-01-17T12:18:51.978279677Z" level=info msg="TearDown network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" successfully" Jan 17 12:18:51.978680 containerd[1456]: time="2025-01-17T12:18:51.978304616Z" level=info msg="StopPodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" returns successfully" Jan 17 12:18:51.978813 containerd[1456]: time="2025-01-17T12:18:51.978770820Z" level=info msg="RemovePodSandbox for \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" Jan 17 12:18:51.978813 containerd[1456]: time="2025-01-17T12:18:51.978800156Z" level=info msg="Forcibly stopping sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\"" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.013 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0", GenerateName:"calico-kube-controllers-67f856786c-", Namespace:"calico-system", SelfLink:"", UID:"bc03b6ec-75c2-4b0b-bb26-44676fd171af", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f856786c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986", Pod:"calico-kube-controllers-67f856786c-xcmdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif794bf9de38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.013 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.013 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" iface="eth0" netns="" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.013 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.013 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.033 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.033 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.033 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.038 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.038 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" HandleID="k8s-pod-network.251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Workload="localhost-k8s-calico--kube--controllers--67f856786c--xcmdb-eth0" Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.039 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:52.045137 containerd[1456]: 2025-01-17 12:18:52.042 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d" Jan 17 12:18:52.045592 containerd[1456]: time="2025-01-17T12:18:52.045155383Z" level=info msg="TearDown network for sandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" successfully" Jan 17 12:18:52.049226 containerd[1456]: time="2025-01-17T12:18:52.049197898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:52.049286 containerd[1456]: time="2025-01-17T12:18:52.049251461Z" level=info msg="RemovePodSandbox \"251280f7aa8f3823dc042aee9c5f3b330715accd20df7f206407071322ded59d\" returns successfully" Jan 17 12:18:52.049859 containerd[1456]: time="2025-01-17T12:18:52.049807978Z" level=info msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.084 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dd97009-378f-4ef4-b765-3bec41555af3", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee", Pod:"calico-apiserver-6d54fccbdb-4zkzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic545b7b8aeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.084 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.084 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" iface="eth0" netns="" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.084 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.084 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.104 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.105 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.105 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.109 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.109 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.111 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:52.116270 containerd[1456]: 2025-01-17 12:18:52.114 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.116270 containerd[1456]: time="2025-01-17T12:18:52.116270304Z" level=info msg="TearDown network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" successfully" Jan 17 12:18:52.116754 containerd[1456]: time="2025-01-17T12:18:52.116293820Z" level=info msg="StopPodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" returns successfully" Jan 17 12:18:52.116754 containerd[1456]: time="2025-01-17T12:18:52.116709277Z" level=info msg="RemovePodSandbox for \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" Jan 17 12:18:52.116754 containerd[1456]: time="2025-01-17T12:18:52.116735978Z" level=info msg="Forcibly stopping sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\"" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.151 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dd97009-378f-4ef4-b765-3bec41555af3", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f5c770b9c786b89e171d87408eda558c754cf5e72fadd6f8413fa098e4933ee", Pod:"calico-apiserver-6d54fccbdb-4zkzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic545b7b8aeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.152 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.152 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" iface="eth0" netns="" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.152 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.152 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.171 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.171 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.171 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.176 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.176 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" HandleID="k8s-pod-network.8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--4zkzk-eth0" Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.177 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:52.183303 containerd[1456]: 2025-01-17 12:18:52.180 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24" Jan 17 12:18:52.183303 containerd[1456]: time="2025-01-17T12:18:52.183256536Z" level=info msg="TearDown network for sandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" successfully" Jan 17 12:18:52.187229 containerd[1456]: time="2025-01-17T12:18:52.187195131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:52.187297 containerd[1456]: time="2025-01-17T12:18:52.187258804Z" level=info msg="RemovePodSandbox \"8a9f255da04105f7cf6a861e15211f291625f7b6e00aeb0cd9b448835902ec24\" returns successfully" Jan 17 12:18:52.187788 containerd[1456]: time="2025-01-17T12:18:52.187758553Z" level=info msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.228 [WARNING][5477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0751a22-7602-4c7d-a7ee-e530eb41ad09", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6", Pod:"calico-apiserver-6d54fccbdb-hj6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid08af2093ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.228 [INFO][5477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.228 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" iface="eth0" netns="" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.228 [INFO][5477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.228 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.252 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.252 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.252 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.258 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.258 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.260 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:52.265492 containerd[1456]: 2025-01-17 12:18:52.262 [INFO][5477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.266150 containerd[1456]: time="2025-01-17T12:18:52.265542427Z" level=info msg="TearDown network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" successfully" Jan 17 12:18:52.266150 containerd[1456]: time="2025-01-17T12:18:52.265586361Z" level=info msg="StopPodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" returns successfully" Jan 17 12:18:52.266409 containerd[1456]: time="2025-01-17T12:18:52.266386978Z" level=info msg="RemovePodSandbox for \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" Jan 17 12:18:52.266460 containerd[1456]: time="2025-01-17T12:18:52.266413629Z" level=info msg="Forcibly stopping sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\"" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.318 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0", GenerateName:"calico-apiserver-6d54fccbdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0751a22-7602-4c7d-a7ee-e530eb41ad09", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fccbdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86db7bfb75e8ae6b92e36c885ffb4571f08127d8a0b9f59c43a84b5c498e11e6", Pod:"calico-apiserver-6d54fccbdb-hj6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid08af2093ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.318 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.318 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" iface="eth0" netns="" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.318 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.318 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.340 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.340 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.340 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.346 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.346 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" HandleID="k8s-pod-network.070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Workload="localhost-k8s-calico--apiserver--6d54fccbdb--hj6qq-eth0" Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.347 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:52.353336 containerd[1456]: 2025-01-17 12:18:52.350 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef" Jan 17 12:18:52.353893 containerd[1456]: time="2025-01-17T12:18:52.353385950Z" level=info msg="TearDown network for sandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" successfully" Jan 17 12:18:52.438313 kubelet[2614]: I0117 12:18:52.438181 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:52.446698 containerd[1456]: time="2025-01-17T12:18:52.446301332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:52.446698 containerd[1456]: time="2025-01-17T12:18:52.446389902Z" level=info msg="RemovePodSandbox \"070a18ed6ef938b97cbca3159891c7cf3518d04257ebc695b8e54a15aae094ef\" returns successfully" Jan 17 12:18:52.954139 containerd[1456]: time="2025-01-17T12:18:52.954064557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:52.988318 containerd[1456]: time="2025-01-17T12:18:52.988227653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:18:53.033912 containerd[1456]: time="2025-01-17T12:18:53.033850155Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:53.067033 containerd[1456]: time="2025-01-17T12:18:53.066949262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:53.068140 containerd[1456]: time="2025-01-17T12:18:53.067862313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.259472884s" Jan 17 12:18:53.068140 containerd[1456]: time="2025-01-17T12:18:53.067903472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:18:53.069347 containerd[1456]: time="2025-01-17T12:18:53.069288537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:18:53.078254 containerd[1456]: time="2025-01-17T12:18:53.078202016Z" level=info msg="CreateContainer within sandbox \"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:18:53.271705 containerd[1456]: time="2025-01-17T12:18:53.271540876Z" level=info msg="CreateContainer within sandbox \"bdda8a5c202d14b6663be4b4b906400056c106c3590082a7ca478f59420a3986\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b\"" Jan 17 12:18:53.274674 containerd[1456]: time="2025-01-17T12:18:53.272528449Z" level=info msg="StartContainer for \"23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b\"" Jan 17 12:18:53.336953 systemd[1]: Started cri-containerd-23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b.scope - libcontainer container 23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b. Jan 17 12:18:53.436853 containerd[1456]: time="2025-01-17T12:18:53.436777137Z" level=info msg="StartContainer for \"23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b\" returns successfully" Jan 17 12:18:53.503961 kubelet[2614]: I0117 12:18:53.503875 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67f856786c-xcmdb" podStartSLOduration=33.459152592 podStartE2EDuration="39.503853708s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:47.024239565 +0000 UTC m=+55.986793890" lastFinishedPulling="2025-01-17 12:18:53.068940691 +0000 UTC m=+62.031495006" observedRunningTime="2025-01-17 12:18:53.503578239 +0000 UTC m=+62.466132564" watchObservedRunningTime="2025-01-17 12:18:53.503853708 +0000 UTC m=+62.466408023" Jan 17 12:18:55.797546 containerd[1456]: time="2025-01-17T12:18:55.797455513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:55.800577 containerd[1456]: time="2025-01-17T12:18:55.800505624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:18:55.806756 containerd[1456]: time="2025-01-17T12:18:55.806701329Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:55.810060 containerd[1456]: time="2025-01-17T12:18:55.809954779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:55.811036 containerd[1456]: time="2025-01-17T12:18:55.810969262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.74161588s" Jan 17 12:18:55.811036 containerd[1456]: time="2025-01-17T12:18:55.811030229Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:18:55.813376 containerd[1456]: time="2025-01-17T12:18:55.813208169Z" level=info msg="CreateContainer within sandbox \"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:18:55.829930 containerd[1456]: time="2025-01-17T12:18:55.829875719Z" level=info msg="CreateContainer within sandbox \"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4fdab68d2b3b7d12c48546ff4b282cfdfa4def2ed00994e165bad2e8d55fee65\"" Jan 17 12:18:55.830544 containerd[1456]: time="2025-01-17T12:18:55.830503341Z" level=info msg="StartContainer for \"4fdab68d2b3b7d12c48546ff4b282cfdfa4def2ed00994e165bad2e8d55fee65\"" Jan 17 12:18:55.869836 systemd[1]: Started cri-containerd-4fdab68d2b3b7d12c48546ff4b282cfdfa4def2ed00994e165bad2e8d55fee65.scope - libcontainer container 4fdab68d2b3b7d12c48546ff4b282cfdfa4def2ed00994e165bad2e8d55fee65. Jan 17 12:18:55.986400 containerd[1456]: time="2025-01-17T12:18:55.986335299Z" level=info msg="StartContainer for \"4fdab68d2b3b7d12c48546ff4b282cfdfa4def2ed00994e165bad2e8d55fee65\" returns successfully" Jan 17 12:18:55.987771 containerd[1456]: time="2025-01-17T12:18:55.987735460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:18:56.655698 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:46090.service - OpenSSH per-connection server daemon (10.0.0.1:46090). Jan 17 12:18:56.695743 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 46090 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:18:56.697473 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:56.701539 systemd-logind[1439]: New session 16 of user core. Jan 17 12:18:56.710783 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:18:56.840431 sshd[5632]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:56.845737 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:46090.service: Deactivated successfully. Jan 17 12:18:56.848334 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:18:56.849147 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:18:56.850171 systemd-logind[1439]: Removed session 16. 
Jan 17 12:18:57.811051 containerd[1456]: time="2025-01-17T12:18:57.810973900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:57.811835 containerd[1456]: time="2025-01-17T12:18:57.811761086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:18:57.813174 containerd[1456]: time="2025-01-17T12:18:57.813132379Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:57.815602 containerd[1456]: time="2025-01-17T12:18:57.815561727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:57.816440 containerd[1456]: time="2025-01-17T12:18:57.816394761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.828535794s" Jan 17 12:18:57.816480 containerd[1456]: time="2025-01-17T12:18:57.816446299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:18:57.818418 containerd[1456]: time="2025-01-17T12:18:57.818381381Z" level=info msg="CreateContainer within sandbox \"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:18:57.836383 containerd[1456]: time="2025-01-17T12:18:57.836335528Z" level=info msg="CreateContainer within sandbox \"c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8cb8d752d9753f074339922b969c4efc4aed5f0d02d69feff05870d3549ba141\"" Jan 17 12:18:57.837064 containerd[1456]: time="2025-01-17T12:18:57.837044314Z" level=info msg="StartContainer for \"8cb8d752d9753f074339922b969c4efc4aed5f0d02d69feff05870d3549ba141\"" Jan 17 12:18:57.893873 systemd[1]: Started cri-containerd-8cb8d752d9753f074339922b969c4efc4aed5f0d02d69feff05870d3549ba141.scope - libcontainer container 8cb8d752d9753f074339922b969c4efc4aed5f0d02d69feff05870d3549ba141. 
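The CreateContainer/StartContainer pairs above (calico-csi, then csi-node-driver-registrar) are containerd's CRI service acting on kubelet requests; the &ContainerMetadata{Name:...,Attempt:0,} fragments echoed into the log are the CRI protobuf types. Below is a minimal client-side sketch of that two-call sequence against the containerd socket. The socket path is the conventional default, the sandbox config is trimmed to a stub, and the whole program is illustrative only, since kubelet is the real caller in this log:

```go
// cri_create_start.go - sketch of the CreateContainer/StartContainer sequence
// that produces the containerd log lines above, issued as a CRI v1 client.
// Socket path and the empty sandbox config are assumptions for illustration.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// CreateContainer inside an existing pod sandbox, as in
	// `CreateContainer within sandbox "c2dc..." for &ContainerMetadata{Name:calico-csi,Attempt:0,}`.
	created, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: "c2dc82a7cd9f3ac3d2675829578597636430de82b46350057d6962e30bd78da1",
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "calico-csi", Attempt: 0},
			Image:    &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/csi:v3.29.1"},
		},
		SandboxConfig: &runtime.PodSandboxConfig{}, // a real call passes the full sandbox config
	})
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer then yields the `StartContainer ... returns successfully` line,
	// and (with the systemd cgroup driver) the cri-containerd-<id>.scope unit above.
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```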
Jan 17 12:18:57.928126 containerd[1456]: time="2025-01-17T12:18:57.928063839Z" level=info msg="StartContainer for \"8cb8d752d9753f074339922b969c4efc4aed5f0d02d69feff05870d3549ba141\" returns successfully" Jan 17 12:18:58.209086 kubelet[2614]: I0117 12:18:58.209047 2614 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:18:58.209086 kubelet[2614]: I0117 12:18:58.209077 2614 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:18:58.471878 kubelet[2614]: I0117 12:18:58.470906 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b9b6b" podStartSLOduration=35.591277895 podStartE2EDuration="44.470886174s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="2025-01-17 12:18:48.937617221 +0000 UTC m=+57.900171536" lastFinishedPulling="2025-01-17 12:18:57.8172255 +0000 UTC m=+66.779779815" observedRunningTime="2025-01-17 12:18:58.470428559 +0000 UTC m=+67.432982884" watchObservedRunningTime="2025-01-17 12:18:58.470886174 +0000 UTC m=+67.433440499" Jan 17 12:19:00.466997 systemd[1]: run-containerd-runc-k8s.io-23c59b43183d93973b9ed20c7d1ba28722f6106b65ef446166435b8ddc93f14b-runc.bLV9d5.mount: Deactivated successfully. Jan 17 12:19:01.856196 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:38698.service - OpenSSH per-connection server daemon (10.0.0.1:38698). Jan 17 12:19:01.907316 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 38698 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:01.909235 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:01.914180 systemd-logind[1439]: New session 17 of user core. Jan 17 12:19:01.924822 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:19:02.092648 sshd[5712]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:02.097793 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:38698.service: Deactivated successfully. Jan 17 12:19:02.101021 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:19:02.101836 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:19:02.103378 systemd-logind[1439]: Removed session 17. Jan 17 12:19:07.104212 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:38708.service - OpenSSH per-connection server daemon (10.0.0.1:38708). Jan 17 12:19:07.131346 kubelet[2614]: E0117 12:19:07.131301 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:07.139077 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 38708 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:07.141300 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:07.145671 systemd-logind[1439]: New session 18 of user core. Jan 17 12:19:07.153844 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:19:07.265027 sshd[5734]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:07.273772 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:38708.service: Deactivated successfully. Jan 17 12:19:07.275769 systemd[1]: session-18.scope: Deactivated successfully. 
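The kubelet csi_plugin lines above ("Trying to validate a new CSI Driver with name: csi.tigera.io ... versions: 1.0.0", then "Register new plugin") are triggered by the csi-node-driver-registrar container started just before: it serves the kubelet plugin-registration gRPC API on a socket under /var/lib/kubelet/plugins_registry/ and answers GetInfo with the driver name, CSI socket path, and supported versions seen in the log. A sketch of that responder follows; the registration socket name and the wiring are assumptions for illustration, not the registrar's actual code:

```go
// registration_socket.go - sketch of a kubelet plugin-registration responder
// matching the "Register new plugin with name: csi.tigera.io" lines above.
// Socket paths are assumptions; only the GetInfo payload values come from the log.
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration status: registered=%v err=%q", s.PluginRegistered, s.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	sock := "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock" // assumed path
	_ = os.Remove(sock)                                                // clear a stale socket
	l, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	log.Fatal(srv.Serve(l))
}
```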
Jan 17 12:19:07.277579 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:19:07.284420 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:38720.service - OpenSSH per-connection server daemon (10.0.0.1:38720). Jan 17 12:19:07.285712 systemd-logind[1439]: Removed session 18. Jan 17 12:19:07.316682 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 38720 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:07.318726 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:07.323257 systemd-logind[1439]: New session 19 of user core. Jan 17 12:19:07.333832 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:19:07.608517 sshd[5748]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:07.617037 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:38720.service: Deactivated successfully. Jan 17 12:19:07.619470 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:19:07.620308 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:19:07.629505 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744). Jan 17 12:19:07.630446 systemd-logind[1439]: Removed session 19. Jan 17 12:19:07.657017 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:07.658668 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:07.663047 systemd-logind[1439]: New session 20 of user core. Jan 17 12:19:07.674890 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:19:09.923357 sshd[5761]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:09.932954 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:50744.service: Deactivated successfully. Jan 17 12:19:09.940226 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:19:09.942372 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:19:09.951719 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756). Jan 17 12:19:09.953443 systemd-logind[1439]: Removed session 20. Jan 17 12:19:09.989617 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:09.991975 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:09.996814 systemd-logind[1439]: New session 21 of user core. Jan 17 12:19:10.003815 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:19:10.333719 sshd[5785]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:10.342783 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:50756.service: Deactivated successfully. Jan 17 12:19:10.345972 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:19:10.349961 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:19:10.356537 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:50764.service - OpenSSH per-connection server daemon (10.0.0.1:50764). Jan 17 12:19:10.357627 systemd-logind[1439]: Removed session 21. 
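Each SSH login above follows the same per-connection pattern: systemd starts a transient sshd@... service for the TCP connection, systemd-logind opens "New session N of user core.", and the matching "Removed session N." marks the end once the connection closes. A small sketch that pairs those two journal lines and reports how long each session lasted; the regular expressions and timestamp layout only cover the exact phrasing used in this log, and the year is not part of the short journal timestamp:

```go
// ssh_session_durations.go - pairs "New session N of user ..." with
// "Removed session N." in journal output read from stdin (one entry per line)
// and prints the duration of each session.  Tailored to the phrasing above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe = regexp.MustCompile(`^(\w+\s+\d+ \d+:\d+:\d+\.\d+).*New session (\d+) of user`)
	remRe = regexp.MustCompile(`^(\w+\s+\d+ \d+:\d+:\d+\.\d+).*Removed session (\d+)\.`)
)

const stamp = "Jan _2 15:04:05.000000" // short journal timestamp, year assumed

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := remRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("session %s lasted %v\n", m[2], end.Sub(start))
				}
			}
		}
	}
}
```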
Jan 17 12:19:10.386836 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 50764 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:10.388869 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:10.393086 systemd-logind[1439]: New session 22 of user core. Jan 17 12:19:10.406908 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:19:10.531154 sshd[5797]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:10.535702 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:50764.service: Deactivated successfully. Jan 17 12:19:10.538091 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:19:10.538819 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:19:10.539893 systemd-logind[1439]: Removed session 22. Jan 17 12:19:13.130771 kubelet[2614]: E0117 12:19:13.130721 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:15.543415 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:50770.service - OpenSSH per-connection server daemon (10.0.0.1:50770). Jan 17 12:19:15.575610 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 50770 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:15.577556 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:15.581570 systemd-logind[1439]: New session 23 of user core. Jan 17 12:19:15.592780 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:19:15.710941 sshd[5815]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:15.715446 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:50770.service: Deactivated successfully. Jan 17 12:19:15.718007 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:19:15.719149 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:19:15.720065 systemd-logind[1439]: Removed session 23. Jan 17 12:19:16.318899 kubelet[2614]: I0117 12:19:16.318264 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:19:18.799482 systemd[1]: run-containerd-runc-k8s.io-bc864cb9f3c52aa2ca3153ba3efa512e69153f06f983d7e91a557b199841f4ba-runc.AeiLpx.mount: Deactivated successfully. Jan 17 12:19:20.724513 systemd[1]: Started sshd@23-10.0.0.101:22-10.0.0.1:59796.service - OpenSSH per-connection server daemon (10.0.0.1:59796). Jan 17 12:19:20.765019 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 59796 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:20.766880 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:20.771217 systemd-logind[1439]: New session 24 of user core. Jan 17 12:19:20.781816 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:19:20.894641 sshd[5873]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:20.897896 systemd[1]: sshd@23-10.0.0.101:22-10.0.0.1:59796.service: Deactivated successfully. Jan 17 12:19:20.900150 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:19:20.901993 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:19:20.903295 systemd-logind[1439]: Removed session 24. 
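The recurring kubelet dns.go error above ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the node's resolv.conf lists more nameserver entries than the three that glibc resolvers honour, so kubelet keeps only the first three when building pod DNS config and reports the truncation. A standalone sketch of that cap; the file path and output wording are illustrative:

```go
// resolv_limit.go - applies the three-nameserver cap behind the kubelet
// warning above to a resolv.conf-style file and reports what would be kept.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf") // path assumed for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded; applying only: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
		return
	}
	fmt.Printf("nameservers: %s\n", strings.Join(nameservers, " "))
}
```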
Jan 17 12:19:25.910621 systemd[1]: Started sshd@24-10.0.0.101:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Jan 17 12:19:25.955074 sshd[5893]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:25.956946 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:25.961460 systemd-logind[1439]: New session 25 of user core. Jan 17 12:19:25.967053 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:19:26.092049 sshd[5893]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.097887 systemd[1]: sshd@24-10.0.0.101:22-10.0.0.1:59810.service: Deactivated successfully. Jan 17 12:19:26.100371 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:19:26.101139 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:19:26.102345 systemd-logind[1439]: Removed session 25. Jan 17 12:19:31.102461 systemd[1]: Started sshd@25-10.0.0.101:22-10.0.0.1:48006.service - OpenSSH per-connection server daemon (10.0.0.1:48006). Jan 17 12:19:31.131386 kubelet[2614]: E0117 12:19:31.131317 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:31.135445 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 48006 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:19:31.137277 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:31.142468 systemd-logind[1439]: New session 26 of user core. Jan 17 12:19:31.150875 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:19:31.262407 sshd[5928]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:31.266584 systemd[1]: sshd@25-10.0.0.101:22-10.0.0.1:48006.service: Deactivated successfully. Jan 17 12:19:31.268810 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:19:31.269407 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:19:31.270411 systemd-logind[1439]: Removed session 26. Jan 17 12:19:33.131211 kubelet[2614]: E0117 12:19:33.131164 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"